
Course Evaluation Policies and FAQs

Each academic department or program is required to develop a teaching evaluation plan with the aim of improving the quality of instruction and student learning (MUPIM 7.2.B). These plans should include formal evaluation of courses that can be used for both self-improvement and summative assessment (MUPIM 7.2.C).

Beginning in 2013, all formal course evaluations have been administered via an online platform.

Policies and Information

Below are key points that should be considered in administering course evaluations to ensure credibility, integrity and instructional improvement:

  1. Instructors should not administer their own evaluations.
  2. Course evaluations will close prior to the start of final exams.
  3. Evaluation results will be available to the instructor after final grades for the semester or term have been submitted.
  4. Each evaluation form includes a common set of six University-level questions, a set of common divisional questions, and a set of questions determined by the department/program. Instructors have the option to add questions to their course evaluation near the beginning of each semester. Once the evaluation period opens, no additional questions may be added.
  5. Independent studies, research, field experience courses and courses with total enrollments of fewer than five (5) students are exempt.
  6. In cross-listed courses, the evaluation form administered will be the one for the “subject code” (of the department/program) in which the student is enrolled. Take the example of ISA/STA 361. Students enrolled in the ISA 361 portion of the course will receive the form that includes the ISA department questions, whereas students enrolled in the STA 361 portion of the course will receive the form that includes the STA department questions.
  7. In cross-listed courses, the “fewer than five student” exemption relates to the total course enrollment (inclusive of all subject codes related to the course). In the example of ISA/STA 361, the exemption would only apply if the total number of students enrolled in ISA and STA 361 is fewer than five.
  8. All instructors (primary and secondary) will be evaluated in team-taught courses. Note: Because the primary instructor is the first instructor listed in Banner, this policy could result in “supervising” faculty being evaluated along with those instructors who may be facilitating sections of the course.
  9. Students enrolled in sections with the same course title that meet on the same day(s), at the same time(s), and in the same room will be given the same evaluation form that combines all of the questions.

Note: Many classes consist of multiple contact times which may or may not have the same instructor. For example, a large lecture course taught by Dr. Wong may have multiple lab sections with different instructors (Professor Smith, Taylor and Jones) and meeting times. Because departments do not assign CRNs in a consistent way in this situation, it is difficult to identify the “unit of observation” for digital evaluations.


Frequently Asked Questions

Online course evaluations are cost-effective, decrease staff workload, lower the margin for error, preserve class time that would otherwise be spent on in-class evaluations, and allow for quick feedback. Nevertheless, administering course evaluations online versus by paper represents a significant change in our process of evaluation.  Below are responses to frequently asked questions:

What are the all-University questions on the course evaluation form?

The scale for ALL questions is 0-4, with higher numbers being better.

Classroom Climate

  • My instructor welcomed students' questions.
  • My instructor offered opportunities for active participation to understand course content.
  • My instructor demonstrated concern for student learning.

Student Learning

  • In this course I learned to analyze complex problems or think about complex issues.
  • My appreciation for this topic has increased as a result of this course.
  • I have gained an understanding of this course material.

What are the divisionally specific questions on the course evaluation form?

Consult your academic dean’s office or the website of the Center for the Enhancement of Learning, Teaching and University Assessment (CELTUA) for division questions.

Are there any guidelines for developing department- or instructor-specific questions?

Consult the CELTUA website for more information.

Are digitally administered evaluations as accurate as paper evaluations?

Burton et al. (2012) found that out of eighteen studies exploring the differences between quantitative feedback provided on paper vs. online evaluations, fourteen of those studies reported no difference between the delivery methods and two reported slightly higher ratings online. In their own experiment, Burton et al. (2012) determined that online ratings were significantly higher than those collected on paper evaluations. Heath et al. (2007) found that online formats garner more positive and useful comments than comments offered via paper evaluations.

Will the quality of the qualitative feedback given online be as high as that offered in a paper version?

Most studies show that a higher percentage of students include qualitative feedback when responding to evaluations online (Donovan et al. 2006; Heath et al. 2007; Kasiar et al. 2002; Laubsch 2006). The amount of online qualitative feedback is also greater than that in paper evaluations: in research analyzing word count, qualitative feedback from online evaluations exceeds that of paper evaluations, often by a wide margin (Burton et al. 2012; Heath et al. 2007; Kasiar et al. 2002; Hardy 2003; Hmieleski and Champagne 2000). Perhaps most importantly, several studies have examined the quality of the comments submitted through both formats and found that online comments were more substantive and informative, as measured by more words per comment, more descriptive text, and more detailed feedback (Ballantyne 2003; Burton et al. 2012; Collings and Ballantyne 2004; Donovan et al. 2006; Johnson 2002).

Will allowing “absentee” students to participate in online evaluations lower the course evaluation scores?

According to Perrett (2013), course and instructor ratings are not related to student attendance. In addition, students with higher cumulative GPAs and higher SAT scores complete online evaluations at higher rates than students with poorer GPAs and lower SAT scores (Thorpe 2002). Students expecting higher grades also evaluate at a higher rate (Adams and Umbach 2012). Finally, students expecting poor grades in a class are no more likely to score an instructor below the class mean than students expecting good grades (Avery et al. 2006; Thorpe 2002).

Although these studies may help to alleviate concerns, it is also important to remember that there may be value in gaining feedback from students who attend the course meetings infrequently. Knowing why a student stopped engaging with a course may provide insights into ways of improving the course in the future.

Is the return rate of online evaluations the same as those administered on paper?

Studies comparing response rates of online vs. paper evaluations find that online evaluations generally have 9-10% lower response rates than do paper evaluations. Adding incentives can boost response rates by 7-25%, depending upon which incentives or interventions are used (Ravenscroft & Enyeart, 2009; Norris & Conn, 2005; Johnson, 2002).

Response rates increase when faculty make it a point to let their students know how to find the evaluations, that the students’ comments are valued, and how the data is used overall.  See below for more suggestions on how to raise response rates.

Will online response rates be high enough to have statistical validity?

Nulty (2008), using an 80% confidence interval and a number of assumptions and corrections for bias, calculated that classes with fewer than 20 students need a minimum 58% response rate to be considered valid, while courses with more than 50 enrollees can use 35% as their bar. Since instituting online evaluations, Miami’s average response rate has exceeded these thresholds.
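As a rough illustration, the two thresholds quoted above can be translated into a minimum number of completed evaluations for a given class size. This is only a sketch: the function name is invented for this example, and the quoted figures cover only classes of fewer than 20 or more than 50 students, so other sizes are not guessed at.

```python
import math

# Sketch of the response-rate thresholds quoted from Nulty (2008) above.
# Only the two quoted cases are handled; class sizes of 20-50 are rejected
# rather than interpolated, since no figure is quoted for them.
def min_responses(enrollment: int) -> int:
    """Minimum completed evaluations implied by the quoted response rates."""
    if enrollment < 20:
        rate = 0.58   # small classes: at least a 58% response rate
    elif enrollment > 50:
        rate = 0.35   # large classes: 35% is the bar
    else:
        raise ValueError("no threshold quoted for classes of 20-50 students")
    return math.ceil(rate * enrollment)

# For example, a seminar of 15 needs 9 completed evaluations,
# and a lecture of 60 needs 21.
```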

How can the response rates for online evaluations be improved?

Studies show that the course instructor can have the biggest influence on raising response rates. Many students believe that faculty do not take evaluations seriously and do not make changes as a result of student reviews (Marlin, 1987; Nasser & Fresco, 2002; Spencer & Schmelkin, 2002). In fact, when asked, very few instructors report having made changes in direct response to student evaluation input (Beran & Rokosh, 2009).

Taking time to educate the students on how evaluations are used and to emphasize that their input will be taken seriously will have a positive effect on response rates (Gaillard et al., 2006). Constructive, informative, and encouraging instructor-student engagement around the course evaluation process is also critical in maintaining or improving response rates (Norris & Conn, 2005; Johnson, 2002; Anderson et al., 2006; Ballantyne, 2003).

Below are some other tips for improving response rates. Select those that work best with your teaching philosophy and personal style of working with students:

  • Remind students about the evaluation two to three weeks before the semester or term ends. One study (Norris & Conn, 2005) found an increase in student response rates when students were given early notification that evaluations were approaching; a reminder about two to three weeks before the term ended was ideal, raising response rates by an average of 17%.
  • Inform students how to complete the evaluations and where to find them. One study found that courses in which instructors demonstrated how to find and use the evaluations system had a 24% higher response rate than in courses with no demonstration given (Dommeyer et al, 2004).
  • Consider making the evaluation an assignment. Making an evaluation an assignment, even with no point value attached, raised response in one study by 7% (Johnson, 2002).
  • Emphasize the importance of evaluation: Students are more likely to complete course evaluations if they understand how they are being used, and believe their opinions matter (Gaillard et al, 2006). Explain how the University and you personally use the feedback. Chen and Hoshower (2003) found that students consider an improvement in teaching to be the most important outcome of an evaluation system, followed closely by an improvement in course content and format.
  • Ask students to bring their laptops to class and allow time in class to complete the evaluation.

References

Adams, M. and Umbach, P. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53: 576-591.

Anderson, J., Brown, G. & Spaeth, S. (2006). Online student evaluations and response rates reconsidered. Innovate, 2(6). Retrieved from http://www.innovateonline.info/index.php?view=article&id=301 

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations? Journal of Economic Education, 37(1): 21-37.

Ballantyne, C.S. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning #96: Online students ratings of instruction (pp. 103-112). San Francisco, CA: Jossey-Bass.

Beran, T., & Rokosh, J. (2009). Instructors' perspectives on the utility of student ratings of instruction. Instructional Science, 37(2): 171-184.

Burton, W., Civitano, A., & Steiner-Grossman, P. (2012). Online versus paper evaluations: Differences in both quantitative and qualitative data. Journal of Computing in Higher Education, 24(1): 58-69.

Chen, Y. & Hoshower, L. B. (2003). Student evaluation of teaching effectiveness: An assessment of student perception and motivation. Assessment and Evaluation in Higher Education, 28(1): 71-88.

Collings, D., & Ballantyne, C. (2004). Online student survey comments: A qualitative improvement? Paper presented at the 2004 Evaluation forum, Melbourne, Australia. Retrieved from http://our.murdoch.edu.au/Educational-Development/_document/Publications...

Dommeyer, C. J., Baum, P., Hanna, R. W., and Chapman, K. (2004). Gathering faculty teaching evaluations by in-class and online surveys: their effects on response rates and evaluations. Assessment and Evaluation in Higher Education, 29(5): 611-623.

Donovan, J., Mader, C. E., & Shinsky, J. (2006). Constructive student feedback: Online vs. Traditional course evaluations. Journal of Interactive Online Learning. 5(3), 283-295.

Gaillard, F., Mitchell, S, & Kavota, V. (2006). Students, Faculty, And Administrators’ Perception Of Students’ Evaluations Of Faculty In Higher Education Business Schools. Journal of College Teaching & Learning, 3(8): 77-90.

Hardy, N. (2003). Online ratings: Fact and fiction. New Directions for Teaching and Learning, 96, 31-41. Retrieved from http://www.google.com/url?sa=t&rct=j&q=northwestern%20course%20evaluatio...

Heath, N. M., Lawyer, S. R., & Rasmussen, E. B. (2007). A comparison of web-based versus pencil-and-paper course evaluations. Teaching of Psychology, 34, 259-261. Retrieved from http://www.isu.edu/psych/fac_rasmussen.shtml

Hmieleski, K. & Champagne, M. V. (2000). Plugging in to course evaluation. The Technology Source Archives, Sept./Oct. Retrieved from http://technologysource.org/article/plugging_in_to_course_evaluation/.

Johnson, T. (2002). Online student ratings: Will students respond? Paper presented at the annual meeting of the American Educational Research Association, New Orleans, 2002. Retrieved from http://www.armstrong.edu/images/institutional_research/onlinesurvey_will...

Kasiar, J. B., Schroeder, S. L. , & Holstad, S. G. (2002). Comparison of Traditional and Web-Based Course Evaluation Processes in a Required, Team-Taught Pharmacotherapy Course. American Journal of Pharmaceutical Education, 66: 268-270.

Laubsch, P. (2006). Online and in‐person evaluations: A literature review and exploratory comparison. Journal of Online Learning and Teaching, 2(2). Retrieved from http://jolt.merlot.org/Vol2_No2_Laubsch.htm

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40: 221-232.

Liegle, J. O., & McDonald, D. S. (2004, November 5). Lessons Learned From Online vs. Paper‐based Computer Information Students' Evaluation System. Information Systems Education Journal, 3(37). Retrieved from http://isedj.org/3/37/ISEDJ.3%2837%29.Liegle.pdf

Marlin, J. (1987). Student Perceptions of End-of-Course Evaluations. The Journal of Higher Education, 58(6): 704-716.

Nasser, F., & Fresko, B. (2002). Faculty Views of Student Evaluation of College Teaching. Assessment & Evaluation in Higher Education, 27(2): 187-198.

Norris, J., & Conn, C. (2005). Investigating Strategies for Increasing Student Response Rates to Online-Delivered Course Evaluations. Quarterly Review of Distance Education, 6: 13-29.

Nulty, D. (2008, June). The adequacy of response rates to online and paper surveys: what can be done? Assessment & Evaluation in Higher Education, 33(3), 301-314. Retrieved from http://public.clunet.edu/~mondsche/misc/Nulty.pdf.

Perrett, J. (2013). Exploring graduate and undergraduate course evaluations administered on paper and online: A case study. Assessment & Evaluation in Higher Education, 38(1): 85-93.

Ravenscroft, M. & Enyeart, C. (2009). Online Student Course Evaluations: Strategies for Increasing Student Participation Rates: Custom Research Brief. Education Advisory Board, Washington D.C. Retrieved from http://tcuespot.wikispaces.com/file/view/Online+Student+Course+Evaluatio...

Spencer, K. & Pedhazur Schmelkin, L. (2002). Student Perspectives on Teaching and its Evaluation. Assessment & Evaluation in Higher Education, 27(5): 397-409.

Thorpe, S. W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd annual Forum for the Association for Institutional Research, Toronto, Ontario, Canada.

University of British Columbia, Vancouver. (2010, April 15). Student Evaluations of Teaching: Response Rates. Retrieved from http://teacheval.ubc.ca/files/2010/05/Student-Evaluations-of-Teaching-Re...