The IDEA Blog

Myths and Misconceptions of Student Ratings: Gender Bias and More Webinar
January 13, 2017

In a webinar held last spring, IDEA President Dr. Ken Ryalls and IDEA Senior Research Officer Dr. Steve Benton responded to some common myths and misconceptions about student ratings and took a deeper look at the hot-button topic of bias. View the webinar in its entirety here.

Myths, Misconceptions and Bias
Many myths and misconceptions surround student ratings of instruction. Ryalls and Benton addressed several for the audience.

Q. Are students competent enough to rate teaching?

A. Students observe teaching more than anyone else, so for that reason alone it makes sense to consider what they have to say. Student ratings have been shown to correlate positively with other measures of teaching effectiveness, with student achievement measures, and with motivation for future learning.

Q. Students just want easy courses, and easy courses are rated higher than difficult ones. Is that true?

A. Ratings are actually higher when students report the instructor sets high achievement standards. More specifically, ratings tend to be lower when students perceive the class as too easy or too difficult, and highest when the class is appropriately challenging.

Q. People tend to act—meaning fill out a survey—only when they are angry. Is that true?

A. The opposite holds true. In a 2012 study, Adams and Umbach found that students who earn a low grade, or no grade, in a course are LESS likely than others to respond to surveys.

Q. Does faculty grading of students' work affect ratings?

A. In a 2003 study involving more than 50,000 classes, Centra found that the grades students expect to earn are only weakly related to student ratings, and this low positive correlation does not necessarily indicate that instructors are lowering standards to get higher ratings.

A. In our own research, conducted in nearly 500,000 classes at more than 300 institutions, we found that high ratings are more likely when students say their teacher challenged them and set high achievement standards.

More than half of the webinar focused on gender bias and on bias overall. Below is a sampling of what was discussed, including some thoughtful questions from the audience.

Q. Have there been any quality studies done around student ratings where gender bias has been found to be meaningful?

A. We're not finding evidence of gender bias in our Student Ratings of Instruction. Our own research at IDEA indicates male and female instructors have similar ratings on relevant learning objectives and almost identical ratings on overall summary ratings of teaching and course excellence.

A. In independent research, Centra and Gaubatz did not find much evidence either. The slight tendency for female students to give higher ratings to female instructors is not substantial enough to affect teaching evaluations.

Q. Very few quality studies have been conducted on racial or ethnic bias in student ratings specifically. But if such bias is suspected, what should be done from a ratings perspective?

A. We have to recognize that all measures are flawed, and therefore multiple indicators of teaching effectiveness should always be used. As with any other bias, if administrators and faculty suspect racial or ethnic bias then additional indicators of teaching effectiveness, such as peer observation and self-evaluation, become increasingly important.

Q. So if the potential for bias is there, what can be done to counteract the potential effect on ratings?

A. Even if we're not seeing an overall pattern of bias with IDEA's SRI, it could still be that your college is a hotbed of bias for whatever reason. We encourage you to adjust as you feel comfortable if you find evidence of bias. It is important to note that lower scores do not necessarily mean bias; it could be that a particular set of teachers simply needs more improvement and happens to be of a particular gender or race.

Q. So again, the question then is not “Is there bias in this tool?” but “Can we find usefulness in these data in spite of bias inherent in humans?”

A. The STUDENT VOICE matters, whether or not students are biased, when you ask about teaching and learning rather than personal characteristics. Students can give any teacher valuable feedback on how to improve.

As was said throughout the webinar, the staff at IDEA are committed to improving learning in higher education through research, assessment, and professional development. We welcome you to contact us at any time to discuss your institution's specific challenges and goals and to learn how we may be of assistance.

For an in-depth examination, take a closer look at IDEA's research and commentary on the topic.

References

Adams, M. J., & Umbach, P. D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53, 576-591.

Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44, 495-518.

Centra, J. A., & Gaubatz, N. B. (2000). Is there gender bias in student evaluations of teaching? The Journal of Higher Education, 71, 17-33.

