The IDEA Blog




Response to Concerns About “Flawed Evaluations”
June 22, 2015
By Steve Benton and Dan Li

A recent article in Inside Higher Ed (IHE) reported on a study conducted by the American Association of University Professors (AAUP) Committee on Teaching, Research and Publication. Based on responses from approximately 9,000 of the 40,000 professors surveyed (about a 23 percent response rate), the Committee reached some tentative conclusions about student ratings (a.k.a. “course evaluations”).

One key criticism voiced by respondents was that institutions that had adopted online evaluations reported much lower student return rates than those that continued with paper evaluations: 20-40 percent versus 80 percent or higher. With such low response rates, faculty are concerned about the validity of student feedback. (For suggestions about how to increase online response rates, see “Best Practices for Online Response Rates.”) Faculty fear that written comments come from students with extreme views: “those very happy with their experience and/or their grade, and those very unhappy.” Given its own low response rate, might a similar statement be made about the AAUP survey: that its results reflect the opinions of faculty who are very happy with student ratings and those who are very unhappy?

Faculty also expressed concerns about gender bias in student ratings of instruction (SRI). Actually, the research is quite clear and consistent on this issue: the slight tendency for female students to give higher ratings to female instructors is not substantial enough to affect teaching evaluations, as long as administrators do not make fine discriminations and do not rely solely on SRI in evaluating teaching (see Centra & Gaubatz, 2000). Some professors also reported they believe “being a tough professor works against them in student evaluations.” Analysis of IDEA student ratings shows just the opposite: in classes where students report that the instructor had high achievement standards, overall ratings of the teaching and the course tend to be higher.
Another claim made in the article, reportedly voiced by Philip Stark, whose study we criticized in a previous blog (“An Evaluation of Course Evaluations” Part I and Part II), is the “growing body of evidence of [SRI] unreliability.” Actually, well-constructed SRIs have very high reliability (see the review by Benton & Cashin, 2014). Students within the same class tend to be highly consistent in their ratings, and ratings of the same instructor across multiple courses are very reliable. In fact, one can make the case that students provide the most reliable source of feedback about teaching because they represent multiple perspectives acquired across multiple occasions.

On other matters, we find much agreement with the AAUP study. SRI should be supplemented with peer review and ongoing faculty development. We were pleased to read that 69 percent of respondents see the need for student feedback about their teaching. We also agree that institutions should end the practice of allowing SRI to serve as the only or primary indicator of teaching effectiveness. IDEA has long recommended that student ratings count for no more than 30 to 50 percent of the overall teaching evaluation.

In the end, Colleen Flaherty, the IHE article’s author, quoted our own Ken Ryalls, President of IDEA: “When we stop thinking of evaluation as an event that occurs at the end of the semester and start thinking of it as an ongoing process that is based on multiple sources of information, we will begin to accept the value of student ratings gathered from a reliable and valid system.” Well said, Dr. Ryalls!

References

Benton, S. L., & Cashin, W. E. (2012). Student ratings of teaching: A summary of research and literature. IDEA Paper No. 50. Manhattan, KS: The IDEA Center.

Benton, S. L., & Cashin, W. E. (2014). Student ratings of instruction in college and university courses. In M. B. Paulsen (Ed.), Higher education: Handbook of theory and research, Vol. 29 (pp. 279-326). Dordrecht, The Netherlands: Springer.

Centra, J. A., & Gaubatz, N. B. (2000). Is there a gender bias in student evaluations of teaching? Journal of Higher Education, 70, 17-33.


