Editor's note: Commentary provides university faculty and staff an opportunity to express their opinions in The Miami University Report. Contributions should be 500-600 words in length and should be directed to Bill Houk (physics), firstname.lastname@example.org. Published commentaries will also be posted online at www.muohio.edu/townsquare/commentary.
The Student Evaluation of Teaching: Off With Its Head!
As every faculty member knows, Miami, like most of its brethren, has a quaint but disturbing and ineffective method of measuring teaching quality: the student evaluation. According to anecdotal evidence, the system works for promotion and tenure decisions as follows. Student responses in each course to a statement such as “give an overall rating of your professor's performance” on a scale of 0 (poor) to 4 (superior) are tabulated and turned into an average. A professor whose mean is consistently at 3.00 or above (why 3?) receives the blessing of various P+T committees; the others are rejected, producing a system simple and neat but grossly unfair and deleterious to the operation of an institution once touted as a “Public Ivy.”
The case against student evaluations begins with a challenge to the assumption that students are qualified to provide assessments of university teaching. Their natural biases, lack of experience, and objectives often in conflict with those of the professor cast doubt on the claim that they can distinguish between a serious teacher and an entertainer with minimal knowledge. In a time when instant gratification is demanded, a professor who inflicts “pain” while striving for benefits to be derived well beyond the semester at hand will be punished.
The case continues with the recognition that student evaluations measure not teaching effectiveness but customer satisfaction. This is a result of the era of entitlement in which we live, the cultivation of people as consumers, and the self-esteem project in our schools. A senior professor of my acquaintance says that the keys to high evaluation scores are a) making students believe that you “care” and b) “selling” the value of the course. Consistent as it is with Miami's revised status as a cross between a social services agency and a marketing organization, this assertion falls within the bounds of the coherence theory of truth.
Exclusive attention to the score on a summary question ignores the many corrupting influences, among them expected grade in the course, workload, standards for performance, and teaching practices relative to intradepartmental colleagues (competitors?), even those of faculty in other areas. Former Provost Crutcher's call for multiple measures has yet to take effect, and I am pessimistic about their ever overcoming the effects of “bad” student evaluations. The current attitude toward teaching portfolios supports this claim. Those of candidates with good scores are praised; those of candidates with low scores are treated with suspicion, as if they were an attempt to conceal some ugly facts, when in fact their purpose is to reveal a beautiful (real) truth.
As I see it, the only solution is to abolish student evaluations and to rely henceforth entirely on assessments of peers and other portfolio materials. To the one who says keep the student scores but give them small influence, I respond that given the entrenchment of such evaluations, this is impossible. It is similar to gazing at a woman beautiful in body, mind, and spirit who is beset by a goiter: one's attention will be disproportionately drawn to her neck. Of course, faculty members' biases (including those induced by allegiance to customer satisfaction) will affect assessments, but an observer should be able to evaluate the evaluator and discard any resulting unholy assertions. Finally, we depend on peer evaluations in the scholarly realm; why allow students to dictate outcomes associated with the vital matter of teaching, which is said to be a Miami professor's most important activity?
Date Published: 12/09/2004