Externally Published Research

Published Research by IDEA Staff

  • Benton, S. L., & Li, D. (2015). Professional development for online adjunct faculty. The Department Chair, 26, 1-3.

    Makes recommendations to department chairs about how to provide professional development for online adjunct faculty. Chairs are encouraged to establish and maintain communication channels, know their faculty, be available and approachable, encourage professional connections, involve adjuncts in departmental decision making, build a collection of resources, and participate in their own professional development.
  • Benton, S. L. (2015). Student ratings of instruction in lower-level postsecondary STEM classes. In Searching for better approaches: Effective evaluation of teaching and learning in STEM (pp. 59-72). Tucson, AZ: Research Corporation for Science Advancement.

    Compares IDEA Student Ratings of Instruction (SRI) administered in STEM (science, technology, engineering, and math) and non-STEM classes. Methods most highly correlated with progress among first-year/sophomore general education STEM students are setting challenging goals, finding ways to help students answer their own questions, and explaining material clearly and concisely.
  • Benton, S. L. (2014). Closing the gap in chairs’ perceptions. The Department Chair, 24, 4.

    Reports on analyses conducted on data collected from The IDEA Feedback System for Chairs. Chairs place the greatest emphasis on attending to administrative details, communicating department needs to the dean, and establishing trust between themselves and the faculty. Faculty rate chairs' performance on the first two very highly.
  • Benton, S. L., & Cashin, W. E. (2014). Student ratings of instruction in college and university courses. In Michael B. Paulsen (Ed.), Higher Education: Handbook of Theory & Research, Vol. 29 (pp. 279-326). Dordrecht, The Netherlands: Springer.

    The authors review research addressing the reliability and validity of student ratings of instruction, including relationships between SRI and other variables, possible sources of bias, ratings administered online versus on paper, and ratings in online versus face-to-face courses. Recommendations are made for the appropriate use of student ratings and for future research.
  • Benton, S. L., Li, D., & Brown, R. (2014). Transactional distance in online graduate courses at doctoral institutions. Journal of Online Doctoral Education, 1, 41-55.

    Compares IDEA Student Ratings of Instruction (SRI) in graduate/professional online and face-to-face classes offered at doctorate-granting institutions. Instructors in soft disciplines are more likely to employ active learning approaches if the course is taught online than face-to-face. Most instructors in hard disciplines rely upon lecture regardless of course format. Students in online classes perceive their instructor expects them to take a greater share of responsibility for learning than do those in face-to-face classes.
  • Benton, S. L., Duchon, D., & Pallett, W. H. (2013). Validity of self-reported student ratings of instruction. Assessment & Evaluation in Higher Education, 38, 377-389.

    Examines the relationship between student ratings of progress on IDEA learning objectives and exam performance in a college course. Students who rated their progress on relevant objectives as either exceptional or substantial outperformed those who reported moderate or less progress.
  • Benton, S. L., Li, D., Gross, A., Pallett, W. H., & Webster, R. J. (2013). Transactional distance and student ratings in online college courses. American Journal of Distance Education, 27, 207-217.

    Compares IDEA Student Ratings of Instruction in courses offered either exclusively online or face to face. Online instructors are less likely to lecture and more likely to use discussion, especially in hard disciplines. Online courses are less common in hard and pure disciplines. Online courses are more structured and place greater expectations on students to share in responsibility for learning. But, they are less likely to stimulate student interest, and student effort in the course is lower.
  • Benton, S. L., & Pallett, W. H. (2013, January). Class size matters. Inside Higher Ed. Retrieved from http://www.insidehighered.com/views/2013/01/29/essay-importance-class-size-higher-education.

    Compares IDEA Student Ratings of Instruction by class size groupings. Instructors in very large classes are more likely to lecture than those in small and medium classes. Students taking classes of small- and medium-size report greater instructor use of hands-on projects, real-life activities, projects requiring original or creative thinking, group work, and collaborative learning. The same students report relatively greater progress on relevant learning objectives, higher motivation, and better work habits.
  • Middendorf, B. J., & Benton, S. L. (2009). Trends in chair responsibilities. The Department Chair, 20, 23-25.

    Reports on analyses performed on faculty ratings of the department chair’s performance, using IDEA’s Feedback System for Chairs (FSC). Faculty gave the highest ratings on responsibilities chairs deemed most important: departmental operations, faculty enhancement, and research and faculty development. Faculty enhancement was the responsibility most highly correlated with faculty overall summary judgments of the chair’s performance.
  • Cashin, W. E. (1999). Student ratings of teaching: Uses and misuses. In P. Seldin, & Associates, Changing practices in evaluating teaching: A practical guide to improved faculty performance and promotion/tenure decisions (pp. 25-44). Bolton, MA: Anker.

    Spells out key uses and misuses of student ratings of teaching, discusses the research on which variables seem to bias ratings and which do not, and outlines specific recommendations on what to do and what not to do in using student ratings to evaluate teaching.
  • Cashin, W. E. (1997). Should student ratings be interpreted absolutely or relatively? Reaction to McKeachie (1996). Instructional Evaluation and Faculty Development, 16, 14-19.
  • Cashin, W. E., & Downey, R. G. (1995). Disciplinary differences in what is taught and in students’ perceptions of what they learn and of how they are taught. In N. Hativa, & M. Marincovich (Eds.), Disciplinary differences in teaching and learning: Implications for practice: New Directions for Teaching and Learning, No. 64 (pp. 81-92). San Francisco: Jossey-Bass.

    Investigated whether Biglan clusters of academic disciplines (hard/soft, pure/applied, life/nonlife) could be used to explain disciplinary differences in college student ratings of instruction. It was found that Biglan clusters do not explain the differences.
  • Cashin, W. E., Downey, R. G., & Sixbury, G. R. (1994). Global and specific ratings of teaching effectiveness and their relation to course objectives: Reply to Marsh (1994). Journal of Educational Psychology, 86, 649-657.

    Using Marsh’s (1994) criterion variables, findings support the conclusion that global items account for most of the variance in criterion measures of teaching effectiveness and may be used for summative evaluation.
  • Cashin, W. E. (1992). Evaluating university faculty with special reference to agricultural economists. In J. Nielson (Ed.), Departmental management and leadership: First national workshop for agricultural economics department chairs, (pp. 107-125). Seattle: Regional Organizations of Agricultural Economics Department Chairs.
  • Cashin, W. E. (1992). Student ratings: The need for comparative data. Instructional Evaluation and Faculty Development, 12, 1-6.

    Comparative data are needed for student ratings of faculty performance because of the considerable inflation of student ratings, the great variability in the way students rate different items, and factors which bias student ratings. Without comparative data, use of student ratings for teaching improvements is misleading and use for personnel decisions is inaccurate at best.
  • Cashin, W. E., & Downey, R. G. (1992). Using global student ratings for summative evaluation. Journal of Educational Psychology, 84, 563-572.

    Evaluates the usefulness of global items in predicting weighted-composite evaluations of teaching. Because global items account for a substantial amount of variance, a short evaluation form could capture much of the information needed for summative evaluation.
  • Cashin, W. E. (1990). Students do rate academic fields differently. New Directions for Teaching and Learning, 113-121.

    Examines research on variables that may bias student ratings of faculty, which has generally found such variables insignificant; students do, however, rate academic fields differently. The real problem is not knowing why this occurs. Institutions and individuals should decide how to take these differences into consideration when interpreting student ratings.
  • Griffin, R. W., & Cashin, W. E. (1989). The lecture and discussion method for management education: Pros and cons. Journal of Management Development, 8, 25-32.

    Discusses the strengths and weaknesses of the lecture method in management education and suggests several techniques for improving the effectiveness of lectures.
  • Cashin, W. E., & Perrin, B. M. (1983). Do college teachers who voluntarily have courses evaluated receive higher student ratings? Journal of Educational Psychology, 75, 595-602.

    Examines the relationship between student ratings and the degree of choice the instructor has in deciding whether a given course will be evaluated. Voluntariness of evaluation does not seem to be an important variable to control when comparing student ratings.
  • Cashin, W. E. (1983). Concerns about using student ratings in community colleges. New Directions for Community Colleges, 41, 56-65.

    Addresses general problems related to faculty evaluation systems and student ratings and specific problems more common to community colleges. Recommends using student rating data in conjunction with other sources of information to compensate for its limitations.
  • Hoyt, D. P., & Reed, J. G. (1977). Salary increases and teaching effectiveness. Research in Higher Education, 7, 167-185.

    The relationship between salary increases and student ratings of teaching effectiveness was studied for a sample of 266 faculty members at Kansas State University. In general, there was a modest but significant correlation between ratings and percent salary increase, with correlations more pronounced in social science and humanities.
  • Hoyt, D. P., & Spangler, R. K. (1976). Faculty research involvement and instructional outcomes. Research in Higher Education, 4, 113-122.

    Faculty most heavily involved in research (as rated by department heads) were found to establish higher academic standards (as rated by students) than those less involved in research. In the natural-mathematical sciences, student ratings were positively related to research involvement; in the social-behavioral sciences, they were negatively related.

Published Research by Others

  • Brocato, B. B., Bonanno, A., & Ulbig, S. (2015). Student perceptions and instructional evaluations: A multivariate analysis of online and face-to-face classroom settings. Education and Information Technologies, 20, 37-55.
  • Johnson, J. F., Bell, E., Bottenberg, M., Eastman, D., Grady, S., Koenigsfeld, C., … Schirmer, L. (2014). A multiyear analysis of team-based learning in a pharmacotherapeutics course. American Journal of Pharmaceutical Education, 78(7), 142.
  • Anderson, M. M., & Shelledy, D. C. (2013). Predictors of student satisfaction with allied health educational program courses. Journal of Allied Health, 42, 92-98.
  • Mohr, D. J., Sibley, B. A., & Townsend, J. S. (2012). Student perceptions of university physical activity instruction courses taught utilizing sport education. Physical Educator, 69, 289-307.
  • Hale, L. S., Mirakian, E. A., & Day, D. B. (2009). Online vs. classroom instruction: Student satisfaction and learning outcomes in an undergraduate allied health pharmacology course. Journal of Allied Health, 38, 36-42.
  • Sonntag, M. E., Bassett, J. F., & Snyder, T. (2009). An empirical test of the validity of student evaluations of teaching made on RateMyProfessors.com. Assessment & Evaluation in Higher Education, 34, 499-504.
  • Thurston, L. P., & Middendorf, B. J. (2009). Evaluating department chair and student leadership in higher education. Educational Considerations, 37, 11-18.
  • McAlpine, L., Oviedo, G. B., & Emrick, A. (2008). Telling the second half of the story: Linking academic development to student experience of learning. Assessment & Evaluation in Higher Education, 33, 661-673.
  • Wright, F. X., & Huguet, M. P. (2008). From chalk to electrons—blended engineering education. Proceedings of the American Society for Engineering Education Conference.
  • Dee, K. C. (2007). Student perceptions of high course workloads are not associated with poor student evaluation of instructor performance. Journal of Engineering Education, 96, 69-78.
  • Walvoord, B. E. (2007). Teaching and learning in college introductory religion courses. Wiley-Blackwell.
  • Klecker, B. M. (2007). The impact of formative feedback on student learning in an online classroom. Journal of Instructional Psychology, 34, 161-165.
  • Albano, L. D. (2006). Classroom assessment and redesign of an undergraduate steel design course: A case study. Journal of Professional Issues in Engineering Education and Practice, 132, 306-311.
  • Jiusto, S., & DiBiasio, D. (2006). Experiential learning environments: Do they prepare our students to be self-directed, life-long learners? Journal of Engineering Education, 95, 195-204.
  • Mehta, S., & Kou, Z. (2005). Research in statics education—do active, collaborative, and project-based learning methods enhance student engagement, understanding, and passing rate? Proceedings of the American Society for Engineering Education Conference.
  • Boser, R., & Stier, K. W. (2005). Implementation of program assessment in a technical ITEC department. Journal of Industrial Technology, 21(2).
  • Emiliani, M. L. (2004). Improving business school courses by applying lean principles and practices. Quality Assurance in Education, 12, 175-187.
  • Frazee, J. (2003). Implementing a student feedback system: Implications for pedagogical growth. Proceedings of Society for Information Technology and Teacher Education International Conference.
  • Brosky, J. A., Hopp, J. F., Miller, T. B., & Deprey, S. M. (2001). Integrating theory and practice on wellness and prevention with older adults in self-contained clinical education experiences. Journal of Physical Therapy Education, 15, 29-36.

Dissertations and Theses

  • Gebb, P. M. (2016, June). Reflection within a professional development curriculum: A study of professional development transfer using student ratings of instruction as an indirect measure (Doctoral dissertation).
    Retrieved from http://www.ben.edu/college-of-education-and-health-services/higher-education/research.cfm.
  • Forte, G. L. (2015). An analysis of variance between students’ evaluations of teaching methods and styles of distance and face-to-face classes through the lens of transactional distance theory (Doctoral dissertation). The George Washington University, Washington, D.C.
  • Good, K. (2015). Investigating relationships between educational technology use and other instructional elements using “big data” in higher education (Doctoral dissertation). Iowa State University, Ames, Iowa.
  • Feit, C. R. (2014). Student ratings of instruction and student motivation: Is there a connection? (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Hobler, D. A. (2014). An exploratory study of the longitudinal stability and effects of demographics on student evaluations of teaching (Doctoral dissertation). Capella University, Minneapolis, Minnesota.
  • Glover, J. I. (2012). Finding the right mix: Teaching methods as predictors for student progress on learning objectives (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Ruby, B. (2012). Evaluating faculty member perceptions regarding the use of student evaluations to improve teaching and course effectiveness at a proprietary, career, art/design college (Doctoral dissertation). Argosy University, Phoenix, Arizona.
  • Smith, G. V. (2011). Transformative learning in the adjunct faculty development process: The promotion of self-reflection (Doctoral dissertation). Argosy University, Phoenix, Arizona.
  • Chappell, K. (2009). Student perceptions of effective instructional practices in liberal arts colleges and the implications for improving teaching in higher education (Doctoral dissertation). Argosy University – Twin Cities, Eagan, Minnesota.
  • Hornbeak, J. L. (2009). Teaching methods and course characteristics related to college students’ desire to take a course (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Middendorf, B. J. (2009). Evaluating department chair’s effectiveness using faculty ratings (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Maffett, L. (2006). An evaluation of the effectiveness of the flight 3 mentor training program (Doctoral dissertation). Nova Southeastern University, Fort Lauderdale-Davie, Florida.
  • Lundquist, J. C. (2006). Discipline differences in student rating of college faculty (Master’s thesis). Sam Houston State University, Huntsville, Texas.
  • Billimek, T. E. (2004). A comparative study of faculty performance in public Texas community colleges granting tenure versus public Texas community colleges not granting tenure (Doctoral dissertation). Capella University, Minneapolis, Minnesota.
  • Weir, J. A. (2004). Active learning in transportation engineering education (Doctoral dissertation). Worcester Polytechnic Institute, Worcester, Massachusetts.
  • King, J. M. (2000). Learner-centered teacher beliefs and student-perceived teaching effectiveness (Doctoral dissertation). University of North Texas, Denton, Texas.
  • Carson, R. D. (1999). Utilizing cognitive dissonance theory to improve student ratings of college faculty (Doctoral dissertation). Texas Tech University, Lubbock, Texas.
  • Peter, R. M. (1998). An analysis of emotionally disturbed students’ perceptions of teacher effectiveness and implications for administrative evaluation (Doctoral dissertation). Seton Hall University, South Orange, New Jersey.
  • Osborn, W. J. (1996). A study of the impact of clinical supervision on classroom teacher behaviors at one community college (Doctoral dissertation). University of Kansas, Lawrence, Kansas.
  • Famatid, C. D. (1995). Quality of teaching and academic achievement as measures of organizational effectiveness in the management of state tertiary institutions (Doctoral dissertation). West Visayas State University, La Paz, Iloilo City.
  • Mays, J. L. (1994). The relationship between organizational structure and the power of department chairs in community colleges (Doctoral dissertation). The University of Texas at Austin, Austin, Texas.
  • Loftin, L. B. (1993). The role of quality instruction in persistence, attrition, and recruitment of college science/mathematics/engineering majors (Doctoral dissertation). University of New Orleans, New Orleans, Louisiana.
  • Li, Y. (1993). A comparative study of Asian and American students’ perceptions of faculty teaching effectiveness at Ohio University (Doctoral dissertation). Ohio University, Athens, Ohio.
  • Sutliff, M. A. (1992). Comparison of the perceived teaching effectiveness of full-time faculty, graduate teaching assistants, coaches, and part-time faculty at selected universities in Tennessee (Doctoral dissertation). Middle Tennessee State University, Murfreesboro, Tennessee.
  • Beadle, M. E. (1988). A study of the relationships among instructional strategies based on the five stages of group development and instructional goals (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Jacobsen, R. H. (1988). The impact of faculty incentive grants on teaching effectiveness (Doctoral dissertation). Temple University, Philadelphia, Pennsylvania.
  • Burbano, C. M. (1987). The effects of different forms of student ratings feedback on subsequent student ratings of part-time faculty (Doctoral dissertation). University of Florida, Gainesville, Florida.
  • Pierce, S. T. (1986). A comparative analysis of part-time versus full-time community college faculty effectiveness (Doctoral dissertation). North Carolina State University, Raleigh, North Carolina.
  • Gorsky, E. L. (1985). A comparative study of the perceived quality of off-campus graduate credit courses in education (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Brandenburg, M. (1985). Communicator style and its relationship to instructional effectiveness in collegiate business education (Doctoral dissertation). Oklahoma State University, Stillwater, Oklahoma.
  • Hirst, W. A. (1982). A study to identify effective classroom teaching competencies for community college faculty (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Rainey, P. E. (1981). Effects of intended usage and class performance level on students’ ratings of teachers in four-year, technology-oriented college curricula (Doctoral dissertation). Texas A&M University, College Station, Texas.
  • Clegg, V.L. (1979). Teaching behaviors which stimulate student motivation to learn (Doctoral dissertation). Kansas State University, Manhattan, Kansas.
  • Zunder, P. M. (1977). The use of goal attainment and subjective utility in the evaluation of instruction (Doctoral dissertation). The University of Vermont, Burlington, Vermont.
  • Robustelli, J. A. (1977). The effect of the perceived use of student appraisal data on the ratings given by students of faculty teaching performance (Doctoral dissertation). The University of Tennessee, Knoxville, Tennessee.
  • McKee, B. G. (1977). The relationships between college students’ rating of instruction and their course-oriented attitudes (Doctoral dissertation). Syracuse University, Syracuse, New York.
