Regular readers of Edwired know that much of my career in higher education has been devoted to the improvement of history teaching, at both the college and K-12 levels. In fact, one of the main goals of this blog was, and remains, the improvement of history teaching through a broader conversation about what constitutes effective classroom practice.
Imagine my surprise when I read last month that Texas A&M University has decided to issue bonuses to faculty members using end-of-semester student evaluations as the metric for deciding who does and who does not get the cash. This new policy, defended by, among others, the Texas Public Policy Foundation, is designed to give professors at a major research university a financial incentive to focus more on teaching and, one assumes, less on research.
If I’d only known that the way to ensure good teaching was to pay cash bonuses based on student satisfaction, I could have saved myself the trouble of doing so much research on teaching, writing about teaching in journals, books, and this blog, and traveling around the country meeting with professors who want to improve their teaching and their students’ learning!
Silly me. It turns out that faculty members really just want someone to show them the money.
Or so the folks at Texas A&M think.
If only it were so simple. Anyone who has been paying close attention over the past decade knows that end-of-semester student evaluations have, by and large, gotten more and more useful as institutions have rewritten the questions, added new ones, and looked more carefully at the data these surveys generate. It is less and less common to find a three-question survey that essentially asks a student if he or she was happy at the end of the semester. Our own survey, for example, now has something like 20 specific questions that allow one to look for correlations within and across classes taught, to determine what sorts of things elicit positive and negative student responses.
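To make that concrete, here is a minimal sketch, in Python with pandas, of the kind of correlation analysis a richer survey makes possible. The question names and scores below are invented for illustration, not drawn from our actual instrument:

```python
import pandas as pd

# Hypothetical end-of-semester survey results: one row per student response,
# one column per survey question, scored on a 1-5 scale. All question
# labels are made up for this example.
responses = pd.DataFrame({
    "clear_expectations":   [4, 5, 3, 4, 2, 5],
    "instructor_feedback":  [3, 5, 2, 4, 2, 4],
    "workload_reasonable":  [5, 4, 4, 3, 3, 5],
    "overall_satisfaction": [4, 5, 2, 4, 2, 5],
})

# Correlate each specific question with overall satisfaction to see which
# aspects of the course track most closely with positive responses.
correlations = responses.corr()["overall_satisfaction"].drop("overall_satisfaction")
print(correlations.sort_values(ascending=False))
```

Run across many sections rather than one toy table, the same calculation is what lets one ask which features of a course, and not just which instructors, elicit positive and negative responses.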
But these end-of-semester surveys remain what they are: customer satisfaction surveys. I have yet to see one that provides any sort of useful data on learning. And that's the goal, isn't it: to promote learning?
I sympathize with the president of Texas A&M in his desire to get faculty in the departments to pay closer attention to teaching and to focus more on student outcomes, and I’m not against rewarding good teaching with money. In fact, I think we don’t reward excellent educators well enough in a system where the reward structure remains tilted toward research output.
And I want to stipulate that I am a big fan of end-of-semester evaluations from students. I think they are an essential part of the evaluation of what is happening (or not happening) in a class.
But there are better ways to reward good teaching than a single-source evaluation. For instance, if I wanted to identify excellent teaching in an introductory class, one measure (among many) would be how the students in that course performed in subsequent courses in that same disciplinary area. So, how did the students taking Chemistry 101 do in 201? Or the students in German 103…how did they do in German 104? Or my Western Civ students…how did they do in a 300-level history course? This sort of longitudinal evaluation would yield very useful data on the learning taking place in those introductory courses.
In this day of large databases, it is not difficult (though not trivial either) to create algorithms that would point out faculty members whose students do exceptionally well in later courses in their disciplines. Why not include some of these data, along with student evaluations and peer evaluations, in an evaluation of teaching excellence? And, while we're at it, why not also include an evaluation of the research side of a professor's teaching: everything from how syllabi are constructed to what she may have published about teaching?
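At its very simplest, such an algorithm might look like the following sketch, again in Python with pandas. The instructor names, grades, and column labels are all invented, and a real system would of course run against an institutional enrollment database rather than a toy table:

```python
import pandas as pd

# Hypothetical enrollment records: which instructor each student had in the
# intro course, and the grade (on a 4.0 scale) that same student later
# earned in the follow-on course in the discipline. All values are invented.
records = pd.DataFrame({
    "student":          ["s1", "s2", "s3", "s4", "s5", "s6"],
    "intro_instructor": ["Doe", "Doe", "Roe", "Roe", "Doe", "Roe"],
    "followup_grade":   [3.7, 3.3, 2.7, 3.0, 3.5, 2.3],
})

# For each intro instructor, how did their students fare in the next course?
by_instructor = records.groupby("intro_instructor")["followup_grade"].agg(
    ["mean", "count"]
)

# Compare each instructor's mean against the overall mean. Instructors whose
# students consistently outperform it would be flagged for a closer (human)
# look, alongside student evaluations and peer evaluations.
overall = records["followup_grade"].mean()
by_instructor["vs_overall"] = by_instructor["mean"] - overall
print(by_instructor.sort_values("vs_overall", ascending=False))
```

Even this toy version hints at why the problem is "not trivial": as a commenter points out below, a serious implementation would have to account for everything else that happens to a student between the intro course and the follow-on one.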
Across the English-speaking part of our planet there is a large and growing movement devoted to the Scholarship of Teaching and Learning, and many practitioners of this domain of scholarship have produced a substantial body of solid research on what does and does not constitute effective teaching.
Why a research university would so stubbornly ignore that body of research in favor of simply throwing money at a problem is a puzzle. My only conclusion, drawn entirely from peering in from the outside, is that the goal is not to improve teaching at Texas A&M, but rather to score publicity points. That I'm even writing about this is an indication that, in that respect at least, Texas A&M has succeeded.
I like the idea of creating longitudinal studies, but I have a quick question: how does such a study pinpoint and separate out the effect that one intro survey class had on a student's later performance from the many other factors that could have affected that student (e.g., other classes, a life-changing event, their decision to read Das Kapital one day, etc.)?
Another question: in running their customer satisfaction survey, does Texas A&M make any attempt to monitor or prevent the grade inflation this setup will encourage?
What did you think of the national database called for by the Commission on the Future of Higher Education?
Here is the latest news on a national database.