Quantifying the Humanities

The rising importance of metrics for evaluation in higher education has more than a few of my friends and colleagues on edge. What will it mean, for instance, when colleges and universities see the same sorts of assessment data generated for the humanities that already exist in K-12 education? Will we see graduation exams in History or English? How does one quantify the many years spent researching and writing a book of history? How will these data be used?

While I think college faculty are right to ask probing questions about the quantification of their efforts in the classroom and in their research, I think it’s wrong-headed to assume that any and all attempts to quantify educational or scholarly endeavors are somehow an evil conspiracy to undermine our academic freedom and integrity.

For instance, read Jennifer Howard’s very interesting article in the Chronicle of Higher Education from October 10, 2008 (“New Ratings of Humanities Journals Do More Than Rank — They Rankle”). For those of you without online access to the Chronicle, the story begins:

A large-scale, multinational attempt in Europe to rank humanities journals has set off a revolt. In a protest letter, some journal editors have called it “a dangerous and misguided exercise.” The project has also started a drumbeat of alarm in this country, as U.S.-based scholars begin to grasp the implications for their own work and the journals they edit.

I would submit that one implication is that academic c.v.s will be much easier to make sense of. This past year I was on a committee in our Center for Teaching Excellence charged with helping nominees for a state award navigate the process. My two charges were in the Psychology Department, and although I know nothing about the relative merits of various Psychology journals, I could quickly see which of their articles appeared in the more difficult-to-publish-in journals. Why? Because academic journals in Psychology publish data on their article acceptance rates. It was therefore obvious at a glance that an article published in a journal with an 11% acceptance rate was probably more notable than one in a journal with a 78% acceptance rate.

Only in the humanities have we been so resistant to any sort of quantification of results. Almost every other major disciplinary category — sciences, engineering, health sciences, social sciences — rates and ranks almost everything they do. And in many of these disciplines college graduates are already subject to de facto graduation examinations administered by various licensing boards. So what makes the humanities so special?

Because I don’t think we are special enough to get a pass on quantification of effort, I was pleased to receive the announcement today that the Humanities Resource Center Online has gone live. A project of the American Academy of Arts and Sciences with some collaboration from organizations such as the National Endowment for the Humanities and the American Council of Learned Societies (among others), the HRC offers one-stop shopping for data on the humanities in the United States, much of it set in a global framework.

Want to know how much money was invested in the humanities in a given year? Want to know about the academic preparation of high school history teachers? Want to know more about the participation of underrepresented groups in graduate programs in English? It’s all there. I applaud the work that has gone into this website and hope that as the years go by more and more data will be deposited there.

Why? Because I’m a historian and I believe in the value of evidence in arguments. The data on this site will make it possible to have much more informed conversations about what is happening (and just as importantly, what is not happening) in the humanities. So, for instance, when we complain that scholars in the humanities are underpaid relative to our peers in other disciplines, now we have the data to prove it. Or when we wonder why our majors seem to be less ethnically diverse than the rest of our student body, we can see how our local findings compare to national data sets.

All in all, I think the current iteration of this project is a great start and I look forward to its further elaboration in the years to come.