
History Out of Tune

If you are a regular reader of this blog, it’s not news to you that I’ve offered up some critique of the AHA’s Tuning Project. After conversing with some “Tuners” at the recent annual meeting of the AHA in New York, I remain skeptical of the “History Discipline Core” that is the key source document of the effort.

Before offering further critique, I want to stipulate what I really like about the Tuning Project, because I like a lot of it. First and foremost, I like the fact that the proposed core will give history departments around the country a basis for solid, on-going assessment of the work they are doing in the classroom and the outcomes their students are achieving. Tuning gives us the chance to set the assessment agenda within our institutions rather than having it imposed on us.

Tuning also gives history departments a foundation upon which they might redesign their majors so that the major becomes a curriculum, not just a basket of courses (as is so often the case).

I also like the way the document encapsulates the core values of historical educators, or at least the core values of the past 100 years or so. For reasons I cited in that earlier blog post (linked above), I remain critical of the almost complete exclusion of the digital humanities from the core being promoted by the Tuners. I think we have to admit that the History Discipline Core is a statement of the past, not of the future: it promotes a version of history education that prepares our students very well for 1995, not 2015. To be clear, I don’t have any quibbles with what is in the Core. My quibbles are with what is not there.

Finally, I really like the many obvious points of intersection between the work of those involved in Tuning and the work of those of us who have been engaged in the scholarship of teaching and learning in history over the past 15 years or so. I would love to see several sessions at the next AHA conference that explored this common ground in much more detail, because I think we have so much to share with and learn from one another.

Despite all these positives, I’m still unhappy with the goals of Tuning for one reason: I think that all the very laudable focus on the core competencies of history students has obscured one of the larger goals of the effort, namely preparing students for success after college. I watched David McInerney’s keynote address at the AHA Tuning workshop in January [available here], and he didn’t get to the importance of student success in the workforce until near the end, when he offered up a suggested elevator speech about Tuning.

Student success after college should be at the top of our list, not as an afterthought in an elevator speech.

I love the liberal arts as much as or more than anyone I know, and I will (and do) defend the value of a liberal arts education to any and all comers. But the simple fact of the matter is this: America is a very different country than it was 20 or 40 years ago, and the students we have now and will be educating for the rest of our lifetimes are very different. Here are just a few data points that, as history educators, we must keep at the forefront of our work:

  • The majority of American public school students live in poverty.
  • In 1990, 28% of children in America were born to single mothers. In 2008 that number was just under 41%. [data here]
  • Americans are carrying more than $1 trillion in student debt. Almost 70% of college graduates leave school with loan debt, averaging just under $30,000, and those are the ones who actually graduate.
  • Only 59% of students at BA-granting institutions graduate within six years.
  • According to Jeff Selingo in his College Unbound, if your family’s household income is in excess of $90,000, your odds of obtaining a bachelor’s degree by age 24 are 1 in 2. If your family’s household income is $35,000 or less, those odds drop to 1 in 17.

Given these facts, any revision of the history curriculum or of the ways we assess our success as educators must take into account the ways that we are responding to what can only be called an educational crisis.

Anything less would be shameful.

Thus, I urge the AHA and those involved in the Tuning Project to be very explicit about the need to craft learning opportunities and curricula that prepare our students for success after college. That means, for instance, demonstrating again and again throughout the courses we teach how this or that element of historical thinking will help them when they become teachers, attorneys, advertising executives, museum educators, archivists, social workers, or whatever else they end up doing.

But it also means writing experiential learning into our curricula in very explicit ways, not just as a single bullet point at the end of a list of “sample tasks.” Given the data I cited above, and the fact that college is going to continue to get more expensive rather than less, we must, must redesign the history major so that it is both a liberal arts discipline and a degree that prepares students for success in the workforce. So, for instance, why not require internships of all our students (thereby committing ourselves to making that happen)? Why not devote one week in every class we teach to showing how something learned in that class will help students in their future career(s)?

We have to do our part to address the challenges our students are and increasingly will face, and the Tuning Project offers historians an invaluable opportunity to do just that.

If we are unwilling to engage with our students’ real and pressing challenges, then I think we should fold up our tents and call it a day.

Rebuilding a Course Around Prior Knowledge

Of the many different courses I teach, the one I’ve made the fewest changes in over the past decade is my survey of modern Eastern Europe. Every other course I teach has been reconfigured in various ways as a result of my research into the scholarship of teaching and learning, but for some reason, I’ve never gotten around to altering this course. I’m ashamed to say that when I taught it last semester, it was really not that much different from the way I taught it for the first time way back in 1999.

I could offer various excuses for why that course seems so similar to its original incarnation, but really the only reason is inertia. I’ve rewritten four other courses and created five more from scratch in the past six or seven years, and because my East European survey worked reasonably well, it was last in line for renovation.

The good news for future students is that I’ve taught it that way for the last time.

Like all upper-division survey courses, HIST 312 poses a particular set of challenges. Because we have no meaningful prerequisites in our department (except for the Senior Seminar, which requires students to pass Historical Methods), students can show up in my class having taken no history courses at the college level. And even if they have, the coverage of the region we used to call Eastern Europe is so thin in other courses that it is as though they had never taken one anyway. That means I always spend a fair amount of time explaining just what part of the world we are talking about, who the people are who live there, and so on, before we get to the real meat and potatoes of the semester.

And then there is the fact that the course spans a century and eight countries (and then five more once Yugoslavia breaks up); it’s a pretty complex story.

To help students make sense of that complexity, over the years I’ve narrowed the focus of the course substantially, following Randy Bass’s advice to me many years ago: “The less you teach, the more they learn.” We focus on three main themes across all this complexity, and by the end of the semester most of the students seem to have a pretty good grasp of the main points I wanted to make. Or at least they reiterate those points to me on exams and final papers. And it’s worth noting that they like the course. I just got my end-of-semester evaluations from last semester, and the students in that class rated the course 5.0 on a 5-point scale, while rating my teaching 4.94.

What I don’t know is whether they actually learned anything.

This semester I’m part of a reading group that is working its way through How Learning Works, and this past week we discussed the research on how students’ prior knowledge influences their thinking about whatever they encounter in their courses. This chapter reminded me a lot of an essay by Sam Wineburg on how the film Forrest Gump has played such a large role in students’ learning about the Vietnam War. Drawing on the work of cognitive psychologists and their own research, Ambrose et al. and Wineburg come to the same conclusion, namely, that it is really, really difficult for students (or us) to let go of prior knowledge, no matter how idiosyncratically acquired, when trying to make sense of the past (or any other intellectual problem).

The research they describe seems pretty compelling to me, especially because much of it comes from lab studies rather than water-cooler anecdotes about student learning. Because it’s so compelling, I’ve decided to rewrite my course around the notion of working from my students’ prior knowledge. Getting them from where they are when they walk into the room on the first day of the semester to where I want them to be at the final exam is the challenge that will animate me throughout the term.

My plan right now (and it’s a tentative plan because I won’t teach the course again for a couple of semesters) is to begin the semester with three short in-class writing assignments on the three big questions/themes that run through the course. I want to know where my students are with those three before I try to teach them anything. Once I know where they are, I can rejigger my plans for the semester to meet them where they are rather than where I might like them to be. And then, as we complete various segments of the course, I’ll have them repeat the exercise so I can see whether they are, as I hope, building some sort of sequential understanding of the material. By the end of the semester I ought to be able to track progress in learning (at least I hope I will), which is an altogether different thing than hoping to see evidence of the “correct answer compromise.”

Playing With History

[9:30] Today and tomorrow I’m at the conference Playing With Technology in History at Niagara-on-the-Lake, Ontario. Day one is an unconference focused on the edges of the envelope in humanities computing. The sessions during the day include things like wearable computers, serious games, MakerBots and CraftRobo, barely games, and walkabout applications for phones, along with good old-fashioned issues like metrics for assessing student learning.

[11:00] I spent the first morning in a session on making (see the post on the conference website). I’m particularly interested in this approach to history, both because of what I’m writing in my book on teaching history in the digital age and because my teaching increasingly emphasizes turning my students loose on the past to create history in ways we haven’t fully thought through.

[12:00] In a session on the Great Unsolved Mysteries of Canadian History project. This is a project I’ve been using for years in my introductory history courses — the Who Killed William Robinson case — as a way of introducing my students to historical research in an engaging and rigorous way. Even though the Robinson case has nothing to do with Western Civ (the course I use it in), it introduces students to the difficulties of historical research, particularly working with documents that just aren’t very clear as to what they mean or don’t mean. We did an exercise where we did what I ask my students to do and then discussed what that meant to us as educators–what we learned from trying to learn like our students and how our expert knowledge about history, as opposed to these particular moments in history, helped us with the exercise. For me it was a lot of fun to spend some time working through a digital resource I have been using for so many years.

[12:20] Why I don’t tweet…The previous paragraph is 958 characters.

[1:30] Went on a walkabout around Niagara-on-the-Lake with an iPhone, researching a mystery from the War of 1812. This application (still in beta), created by our conference host Kevin Kee, is just the sort of thing Tom Scheinfeldt, Josh Greenburg, and I envisioned something like four years ago, in the days before the new generation of smartphones. Ours was going to be “Stop Booth” and would give you a chance to traverse the historical/geographical space of D.C. in a quest to save President Lincoln from his assassin, but a combination of technological limitations and a lack of funding kept us from ever pursuing the idea. It was really exciting to see Kevin’s history quest through town on an iPhone and to imagine all the ways we’re going to be able to take advantage of this platform as humanists.

[3:00] Sat with Bill Turkel to see how RFID tags could be used in humanities applications. He demonstrated a simple (for him) program that allows an RFID reader to gather data from a tag and then link it to a database of historical information. One idea I had from that demonstration would be to create a “magic wand” with a reader in its tip that would let students wave the wand over an artifact or a bank of photographs to gather information about the thing being examined. If the readers had a greater range, something similar could be done with historic sites: students could wander through a site, and as they passed tags, historical content could pop up on their phones. What makes this different from just having a GPS application is that they would have to actually pass close to the object with the RFID reader to get credit for completing some sort of quest in the site.
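Bill’s setup was more sophisticated than this, but the core idea is simple enough to sketch. Below is a minimal, hypothetical version of the “magic wand” loop in Python, assuming a serial-connected RFID reader (read via the pyserial library) that emits one tag ID per line; the port name, tag IDs, and artifact descriptions are placeholders I’ve made up for illustration, not anything from Bill’s demo.

```python
# A minimal sketch of the "magic wand" idea: read tag IDs from a serial RFID
# reader and look each one up in a small table of historical content.
# The port, tag IDs, and descriptions below are hypothetical placeholders.
import serial  # pip install pyserial

# Hypothetical lookup table standing in for a real database of artifacts.
ARTIFACTS = {
    "04A3F2": "Teacup recovered from a Niagara-on-the-Lake household, c. 1812.",
    "09B7C1": "Musket ball attributed to the Battle of Queenston Heights.",
}

def wave_the_wand(port: str = "/dev/ttyUSB0", baud: int = 9600) -> None:
    """Print the historical record linked to each tag the wand passes over."""
    with serial.Serial(port, baud, timeout=1) as reader:
        while True:
            tag_id = reader.readline().decode("ascii", errors="ignore").strip()
            if not tag_id:
                continue  # nothing scanned during this polling interval
            record = ARTIFACTS.get(tag_id)
            if record:
                print(f"{tag_id}: {record}")
            else:
                print(f"{tag_id}: no record found for this tag")

if __name__ == "__main__":
    wave_the_wand()
```

Swap the print statements for writes to a phone app or a quest log and you have the skeleton of the site-based version described above.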

The big question for all of us at this conference is how all the “play” we are talking about can be connected to the serious purposes of teaching and learning. I’m a believer that there are direct connections, but I am also hard-headed enough to insist that those connections be made explicit through data (qualitative or quantitative) that demonstrate how certain kinds of learning take place during or as a result of play.

Why Assessment Gets a Bad Name

Regular readers of this blog will know that I am actually quite supportive of the whole idea of assessment in higher education. I am convinced that we need authentic forms of longitudinal assessment of learning in all of our programs, especially undergraduate programs, that provide us with some sort of reasonable picture of whether our students are learning what we want them to learn and whether they are getting better or worse at it. In this way we can have some sense of whether we are doing the right thing for our students.

Without such assessments we are forced to fall back on (a) the nods and smiles of our students that are supposed to tell us that they “got it” today in class, (b) their performance on the tests and essays we give them, which may or may not be tied to departmental learning objectives, or (c) end-of-semester student evaluations that, of course, are no measure of learning.

However, the experience we are having right now in my department is a perfect example of why faculty members want to run screaming from anyone who utters the dreaded word “assessment.” You may find this difficult to believe (or maybe you won’t), but we are currently having to undergo five separate assessments of learning in our undergraduate program. How can one department end up engaged in five separate assessments simultaneously? I’ll try to explain…

  1. We have our own assessment (one I helped design) that goes like this: All History majors must take History 300 (Historical Methods) and History 499 (Senior Research Seminar). Each semester we select a random sample of final papers from History 300 and put them in a file, and then, when those students complete History 499, we pull their final research papers. Then we convene as a group and score each pair of papers on a rubric of historical thinking skills to see whether (a) our students are learning what we hope they will learn and (b) they are, as a group, making progress over time. This is an on-going assessment of learning in our major, and one we subject ourselves to. (A rough sketch of how those paired rubric scores might be tabulated appears after this list.)
  2. Several years ago our Provost created an Academic Program Review process that, for us, began in 2008, and will continue every other year, apparently forever. This particular process uses a software platform called the “Weave”. Please don’t ask me to tell you how it works. My colleagues and I figured it out in 2008, but it has already been updated several times and apparently now works entirely differently. I won’t tell you the adjectives used to describe the Weave by a colleague in Cultural Studies (this is, after all, a family blog).
  3. We are now in the first full phase of our decennial reaccreditation by the Southern Association of Colleges and Schools (SACS). For now the SACS process is all about making sure we collect credentials and syllabi from our faculty and about creating a Quality Enhancement Plan (QEP). I am part of the QEP steering committee for the University, and the end result will be quite good. Getting to that result is going to be painful at times, but the benefits for our students will make it worthwhile.
  4. We are undergoing an assessment of the general education curriculum. I haven’t been able to determine the mandate for this particular assessment, but for now I’ll refer you to my previous post for some insights into my thinking about general education. The short version is that I think distribution requirements are a good thing. Mandating particular courses (and thereby stifling student choice) is a bad thing.
  5. The State Council of Higher Education for Virginia has mandated an assessment of how all colleges and universities in the state are helping our students become better writers. For this particular assessment, a group of seven or eight faculty were asked to come together to use the rubric we use in our assessment of historical thinking skills (see #1 above) to assess student writing. I had to leave that meeting well before it was over, which is probably why I’m still unclear how a rubric designed to measure historical thinking can be used to measure writing. Moreover, I’m unclear how having each academic department in the University measure writing with its own rubric will yield data that can be aggregated in some sort of meaningful way. But maybe that’s just me…
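For readers who like to see the mechanics, here is the rough sketch promised in item 1. It is not our actual instrument or data, just a hypothetical illustration of how paired rubric scores from History 300 and History 499 papers might be tabulated to look for growth; the skill names, student IDs, and scores are all invented.

```python
# Hypothetical tabulation of paired rubric scores: each student's History 300
# paper and History 499 paper are scored on the same rubric, and we look at
# the average change on each skill. All values below are invented examples.
from statistics import mean

RUBRIC_SKILLS = ["use of evidence", "contextualization", "argumentation"]

# student id -> (scores on the History 300 paper, scores on the History 499 paper),
# listed in the same order as RUBRIC_SKILLS.
PAIRED_SCORES = {
    "student_01": ([2, 1, 2], [3, 3, 3]),
    "student_02": ([3, 2, 2], [4, 3, 3]),
    "student_03": ([1, 2, 1], [2, 2, 3]),
}

for i, skill in enumerate(RUBRIC_SKILLS):
    before = mean(scores_300[i] for scores_300, _ in PAIRED_SCORES.values())
    after = mean(scores_499[i] for _, scores_499 in PAIRED_SCORES.values())
    print(f"{skill}: {before:.2f} -> {after:.2f} (change {after - before:+.2f})")
```

In real life the scoring happens around a seminar table with printed rubrics, but tracking the results in even a simple table like this makes the longitudinal comparison in item 1 much easier to see.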

Okay, got all that? What we have here is one assessment generated by the department, two assessments coming from the Provost’s office, and two from outside agencies with some level of supervisory authority over us.

As much as I seem to be complaining (because I am) that we have so many assessments going on at once, I want to reiterate that I am sympathetic to the need for each one of these. I can’t argue with the need to know all of the things these different assessments are after.

But (and I think this is a very important but) the last time I checked, faculty members were first and foremost supposed to teach their students and, second, supposed to produce high-quality research. Of course, we also engage in lots of departmental, college, and university service (not to mention community service). Even with those mandates, we must make time for at least some assessment, but five assessments? All at once?