Good teaching is difficult to define. Our scholarly approach to understanding many things is to break a concept down into its parts so we can examine and fully understand the components. But good teaching is difficult to consider in a reductionist fashion. Do good teachers smile at least three times per class session or return exams within two days? Do they respond to student questions with a Socratic response? How do we operationalize good teaching so that we can accurately distinguish good teaching from bad and justify labeling it as such? Or must we, like Justice Potter Stewart in 1964, who admitted he could not adequately define obscenity in a case that required him to, simply say, "I know it when I see it"?
The resistance some faculty have to evaluation is based in part on the idea that good teaching cannot be adequately measured. It cannot be reduced to measurable parts. And that's at least partly true. One cannot determine quality teaching by looking at a single data point. Even measures of student learning, used as a means of determining instructional quality, do not account for variables beyond instructor control, such as prior knowledge and student effort. And while multiple measures taken together certainly reflect the quality of teaching more accurately, they still cannot simply be added up to a definitive number that adequately represents instruction. Surely good teaching is more than the sum of its parts. That's where our "know it when I see it" interpretation has to come in.
Some instructors may be really good at engaging students in a topic during a face-to-face class. They are entertaining, perhaps funny or provocative, and really get students to explore a topic. Another instructor may be less effective at this, but through engaging assignments and individual feedback can move students along the path to learning very effectively. The entertaining professor may be observed by a colleague and deemed exceptional, while our second professor might be considered average at best. Is that observer right?
What about student feedback? Is an instructor who gets average ratings from students a poorer instructor than one who regularly gets higher ratings? Certainly, consistently low ratings mean something is going on that should be considered, but there may be other explanations, such as the instructor being required to teach outside their specialty or having courses consistently overloaded with students. Are students right in their assessment?
What about more objective measures such as evidence that an instructor uses a lot of active learning, authentic learning activities and assessments, or provides plenty of time outside of class to help students? Do these indicators tell us for sure that quality instruction is happening?
In all these cases, each measure gives us good data. If we add them all up, however, is the sum a good measure of instructional quality? In some cases, it may be, but for most instructors, the reality is more complicated. Those conducting an evaluation must consider the available forms of evidence (and again, the more the better), but at some point, they have to make an informed, thoughtful, and intentional evaluative judgement. The chair, or dean, or faculty committee looks at the widest evidence possible and concludes, "this is good instruction" or not (or perhaps, "this is instruction that is all right but could be better"). Such judgements, however, must be made on more than a hunch, more than just "I know it when I see it," because the judgement has to be defended. That is why institutions need a system for collecting a wide range of evidence, a method for evaluating each form of evidence, and a way to help evaluators reach that final conclusion.
David Pollock, PhD
Faculty Development Specialist
Does your institution evaluate instruction effectively? IDEA has resources to help users of our Student Ratings of Instruction, as well as materials useful for any institution. See our Balanced Faculty Evaluation webpage for more detail.