Technical Report No. 18—Validity Evidence for IDEA Student Ratings of Instruction
By Steve Benton
IDEA’s most recent technical report, No. 18, describes the research behind upcoming revisions to the student ratings system. In this blog post I highlight validity evidence for what will soon be called IDEA2.
Validity refers to the degree to which evidence supports the interpretation of a test score for its intended use (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 2014). The validity of any assessment therefore depends on proper interpretation and use. Unfortunately, student ratings of instruction (SRIs) are often over-emphasized in summative evaluation and under-utilized in formative evaluation. They are over-emphasized when faculty and administrators rely on them exclusively as evidence of teaching effectiveness in decisions about tenure, promotion, and merit salary adjustments. They are under-utilized when faculty ignore the developmental feedback provided on the report. Here are several sources of validity evidence found in Technical Report No. 18.
Correlations between Faculty and Student Ratings of Learning Objectives
In the IDEA system, faculty rate the importance of each of 12 learning objectives, and students rate their perceived progress on each. As has been true since the early development of IDEA, the correlations between instructor and student ratings of the same objective are higher than those between non-corresponding objectives. That is, students report greater progress on objectives their instructor identifies as Essential or Important than on those of Minor or No importance.
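To make the pattern concrete, here is a minimal sketch using entirely hypothetical, illustrative numbers (not IDEA data): instructor importance ratings for a corresponding and a non-corresponding objective are each correlated with class-average student progress on that objective.

```python
import numpy as np

# Hypothetical class-level data, for illustration only (not IDEA data).
# Instructor importance ratings: 1 = Minor, 2 = Important, 3 = Essential.
importance_corresponding = np.array([3, 1, 3, 2, 3, 1, 2, 3, 1, 2])
importance_noncorresponding = np.array([1, 3, 1, 3, 3, 1, 2, 2, 2, 2])

# Class-average student progress ratings on the objective (1-5 scale).
student_progress = np.array([4.2, 2.9, 4.0, 3.4, 4.4, 2.7, 3.5, 4.1, 3.0, 3.6])

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length arrays."""
    return float(np.corrcoef(x, y)[0, 1])

# Ratings of the SAME objective correlate strongly with reported progress;
# ratings of a DIFFERENT objective do not.
r_corresponding = pearson(importance_corresponding, student_progress)
r_noncorresponding = pearson(importance_noncorresponding, student_progress)
print(f"corresponding: {r_corresponding:.2f}, "
      f"non-corresponding: {r_noncorresponding:.2f}")
```

In this toy example the corresponding correlation is strong and positive while the non-corresponding one is near zero, mirroring the pattern the report describes.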
Relationships between Course Requirements and Progress on Learning Objectives
On the Faculty Information Form, instructors indicate how much emphasis (Much, Some, or None) they give to each of nine course requirements (e.g., writing, oral communication). Students report greater progress on the relevant objectives (e.g., Developing skill in expressing myself orally or in writing) when the instructor gives the associated requirement much emphasis than when it receives none.
Relationships between Course Circumstances and Progress on Learning Objectives
Faculty also report whether each of nine course circumstances had a positive impact, a negative impact, or neither on learning. In general, instructors who rated course circumstances positively received higher ratings on the excellence of the instructor and the course, as well as higher average student progress on relevant objectives.
Correlations between Student Motivation and Student Types
Students’ ratings of their desire to take the course are higher for those taking a course in their intended area of specialization than for those taking it to fulfill a general education or distribution requirement.
Correlations between Student Progress on Relevant Objectives and Actual Achievement
Across multiple sections of the same course taught by the same instructor, student ratings of progress on relevant course objectives are positively correlated with exam scores, whereas ratings on irrelevant objectives are not (Benton, Duchon, & Pallett, 2013).
Evidence Based on Internal Structure
Faculty and student ratings are multidimensional. More than one factor underlies ratings of learning objectives and teaching methods. Student ratings of teaching methods are differentially correlated in logical ways with ratings on learning objectives. For example, making it clear how each topic fit into the course is important for gaining factual knowledge but not for acquiring skills in working with others as a member of a team.
We encourage you to use Technical Report No. 18 as a resource and to always triangulate IDEA SRIs with multiple sources of evidence.
References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Benton, S. L., Duchon, D., & Pallett, W. H. (2013). Validity of self-reported student ratings of instruction. Assessment & Evaluation in Higher Education, 38, 377-389.