By Ken Ryalls, Ph.D.
This academic year has seen a rash of articles in the popular and pseudo-scientific press about the uselessness of student ratings of instruction and course evaluations. For every article that appears, our research team puts together a cogent, professional critique: citing research, agreeing where warranted, and gently offering alternative perspectives on points with which we (and the vast body of scholarly research) disagree. We publish these quietly on our website (see some of the latest here), comment constructively through our blog or Twitter, and get little to no attention from the news outlets. In the cases in which we have actually reached out to the media for the chance to rebut grossly misleading articles, we are rejected quickly or, more often, simply ignored. Poorly conducted research studies continue to spring up, and the misperceptions not only persist but receive intense coverage in the popular press. Why? As an academic trained in social cognition, I find that the answer (or at least an educated guess) is actually quite easy to generate. As a member of an organization passionately committed to improving learning in higher education, however, I find myself pleading to my smug social psychologist self like the young fan of Shoeless Joe: "Say it ain't so."
What are the main complaints against course evaluations, in a nutshell?
- There is inherent bias in course evaluations against certain groups.
- Students are not qualified to evaluate an instructor.
- Information obtained from students is misused, often to the detriment of faculty.
As we at IDEA have repeatedly stated in various articles, these criticisms are entirely correct. “Finally!” shout the attackers. “You have admitted what we suspected all along!” Ah, but it is not these premises with which we take issue, but rather the conclusions drawn from the veracity of the premises. Let’s analyze these arguments one by one.
Bias: Of course there is bias in student feedback; course ratings are surveys designed by humans and filled out by humans. Any attempt at psychological measurement is imperfect, and survey measures in general are fraught with psychometric pitfalls. So to point out bias in student ratings as though it were a disqualifying discovery is at best naïve, and at worst a blatant attempt at sensationalism. Those who excitedly assert that these instruments are biased, as if discovering a new planet, are missing a basic fact of humanity: we are all biased. There is no way to turn off your cognitive preconceptions, stereotypes, and expectations, and everything that we experience is processed through these filters. If you throw out student feedback because of bias, then throw out peer feedback and administrator feedback too. We might as well throw out promotion and tenure committees, annual reviews, reference letters, and anything else that has a human element to it. Grades given by instructors are also useless, since instructors are human and therefore full of bias. To point out bias in a rating given by a human and use it to negate the usefulness of that rating is absurd. The question, then, is not "Is there bias in this tool?" but "Can we find usefulness in these data in spite of the bias inherent in humans?" The answer is yes, provided the survey instrument is well designed.
Students As Evaluators: We agree that students are not qualified to evaluate instructors. If you are familiar with the IDEA SRI instrument, you know that it carries the name "Student Ratings of Instruction" for a reason: students provide feedback on the instructional aspects of the course; they do not evaluate the instructor. We continually correct those who call our SRI an evaluation tool, because that label runs counter to our philosophy. Students provide feedback on instruction, which can then be used effectively for evaluative purposes by peers, chairs, and deans, provided that the data are part of a holistic assessment of instructor performance.
Misuse of Student Data: Are student ratings data misused? Of course; the world is not a perfect place. But misuse of data does not prove the uselessness of data; it merely shows that some people involved in the evaluation process are fallible, or even malicious. Training and development are needed around how student ratings data can be effectively included in an instructor's evaluation, and we spend a lot of time at IDEA on just that issue (see here). We find that the vast majority of the time our clients use student ratings data responsibly, recognizing the voice of the student as a relevant part of a holistic analysis of instructor performance.
Student ratings of instruction inspire passion, as they can have a very personal effect on those who teach. Let us return to my original question: why the relentless attack on student ratings of instruction, and why can't you open a major academic news site without seeing another one? I see two reasons driving this phenomenon. The first is easy to explain: because the topic inspires passion, attacks on student ratings drive readership, so media outlets love to give them front-page space. Certain politicians appear far too often in the headlines for the same reason: love them or hate them, you're motivated to read the article. While I wish every article were written in a balanced way, I understand the phenomenon and accept it, as news outlets are motivated not only by truth but by the bottom line as well.
The second reason behind the vitriol surrounding student ratings is more insidious, and points to a minority of academics who wish to silence the voice of the student by destroying the only institutionalized means of gathering student feedback. Get rid of student ratings of instruction, and you render mute a group of people from whom you do not wish to hear. These academics are interested only in information supporting their belief system, and vehemently shoot down any rational analysis counter to their point of view. For a depressingly revealing exercise, read the comments section of any article pertaining to student ratings; you will find anecdote after anecdote about the uselessness of the information. Most start with platitudes such as "Everyone knows that these are useless…," quickly followed by a chorus of stories about favorite student transgressions. While I do not deny that some student feedback is misguided, useless, or even downright cruel, keep in mind that students spend more time observing the faculty member's teaching than anyone else on campus. In addition, the student is quite literally the reason the course exists in the first place! Surely there is something students can tell us that will provide insight into our teaching. To continue to assert that all student feedback is useless is ridiculous, especially if you're asking the students the right questions.
Cynics who think that my words are motivated by sales of student ratings instruments, please take note: we at IDEA are first and foremost dedicated to the improvement of learning in higher education through research and faculty development. When someone invents a better way to gather feedback from students on the effectiveness of their instructional experiences, I guarantee that The IDEA Center will be the first to embrace the new method. In the meantime, whether you use IDEA's Student Ratings of Instruction or another well-designed, research-based tool, the fact remains that you have an ethical obligation to use something. Student voice matters. Take time to listen. Ask good questions and use the data wisely, and you will reap the benefits of student feedback as you work to improve your teaching. Unless, of course, you really don't want to be bothered.