
Rebuilding a Course Around Prior Knowledge

Of the many different courses I teach, the one I’ve made the fewest changes in over the past decade is my survey of modern Eastern Europe. Every other course I teach has been reconfigured in various ways as a result of my research into the scholarship of teaching and learning, but for some reason, I’ve never gotten around to altering this course. I’m ashamed to say that when I taught it last semester, it was really not that much different from the way I taught it for the first time way back in 1999.

I could offer various excuses for why that course seems so similar to its original incarnation, but really the only reason is inertia. I've rewritten four other courses and created five more from scratch in the past six or seven years, and because my East European survey worked reasonably well, it was last in line for renovation.

The good news for future students is that I’ve taught it that way for the last time.

Like all upper division survey courses, HIST 312 poses a particular set of challenges. Because we have no meaningful prerequisites in our department (except for the Senior Seminar, which requires students to pass Historical Methods), students can show up in my class having taken no history courses at the college level. And even if they have, the coverage of the region we used to call Eastern Europe is so thin in other courses that it is as though they had never taken another course anyway. That means I always spend a fair amount of time explaining just where we are talking about, who the people are who live there, and so on, before we get to the real meat and potatoes of the semester.

And then there is the fact that this course spans a century and eight countries (and then five more once Yugoslavia breaks up); it's a pretty complex story.

To help students make sense of that complexity, over the years I've narrowed the focus of the course substantially, following Randy Bass's advice to me many years ago: "The less you teach, the more they learn." We focus on three main themes across all this complexity, and by the end of the semester, most of the students seem to have a pretty good grasp of the main points I wanted to make. Or at least they reiterate those points to me on exams and final papers. And it's worth noting that they like the course. I just got my end-of-semester evaluations from last semester, and the students in that class rated it a 5.0 on a 5-point scale, while rating my teaching 4.94.

What I don’t know is whether they actually learned anything.

This semester I'm part of a reading group that is working its way through How Learning Works, and this past week we discussed the research on how students' prior knowledge influences their thinking about whatever they encounter in their courses. This chapter reminded me a lot of an essay by Sam Wineburg on how the film Forrest Gump has played such a large role in students' learning about the Vietnam War. Drawing on the work of cognitive psychologists and their own research, Ambrose et al. and Wineburg come to the same conclusion, namely, that it is really, really difficult for students (or us) to let go of prior knowledge, no matter how idiosyncratically acquired, when trying to make sense of the past (or any other intellectual problem).

The research they describe seems pretty compelling to me, especially because much of it comes from lab studies rather than water cooler anecdotes about student learning. Because it's so compelling, I've decided to rewrite my course around the notion of working from my students' prior knowledge. Getting them from where they are when they walk in the room on the first day of the semester to where I want them to be at the final exam is the challenge that will animate me throughout the term.

My plan right now (and it's a tentative plan because I won't teach the course again for a couple of semesters) is to begin the semester with three short in-class writing assignments on the three big questions/themes that run through the course. I want to know where my students are with those three before I try to teach them anything. Once I know where they are, I can rejigger my plans for the semester to meet them where they are rather than where I might like them to be. Then, as we complete various segments of the course, I'll have them repeat this exercise so I can see whether they are, as I hope, building some sort of sequential understanding of the material. By the end of the semester I ought to be able to track progress in learning (at least I hope I will), which is an altogether different thing from hoping to see evidence of the "correct answer" compromise.

How Heuristics Make History Hard

What do we really know about how our students generate answers to historical questions? Thanks to Sam Wineburg, Peter Seixas, Bob Bain, Stephane Levesque, and others in their orbits, we know a good bit about how K-12 history students reach their conclusions about the past, but when it comes to higher education, we know far too little. In fact, we're often puzzled by the answers our students arrive at. Why do they assign great importance to a particular piece of evidence when our view is that it was just a run-of-the-mill source, not particularly worthy of extra attention? Why is it so hard to shake them from their belief that, say, people in the past wanted the same things that people today want?

To date, too many of our answers to these and other such questions have been based on folk wisdom about "kids today" or an over-reliance on what we observe in our classrooms as being representative of "all students." Real research, based on real data, would surely take us much farther down the road toward understanding how our students think.

Fortunately, scholars in disciplines other than history have done some hard thinking about these issues and, just as fortunately, have done that real research, generating real data.

It's not every day that a historian reads an article with a title like "The Role of Intuitive Heuristics in Students' Thinking: Ranking Chemical Substances," but read it you should. [Science Education, 94/6, November 2010: 963-84] The authors, Jenine Maeyer and Vicente Talanquer, proceed from the assumption that the better we understand how our students think, the better our curricula can be. This is an entirely different approach from one that asks, "What should students who graduate with a degree in chemistry/history/sociology know?" That question needs to be answered in every discipline, but if learning is the goal of our teaching, then we must understand how that learning occurs as we design those curricula. To do otherwise is to waste our time and our students'.

Maeyer and Talanquer begin with a question: What are the cognitive constraints that impede their students' ability to engage in the kind of careful and complex analysis that they want to induce in their courses? Drawing on 30 years' worth of research from cognitive science as well as classroom research in the sciences, they describe two constraints and four reasoning strategies arising from those constraints. While they are writing about the analysis of chemical substances, a history teacher could very easily substitute "primary sources" and "history" for "substances" and "chemistry" and learn a lot from their results.

The two cognitive constraints they describe are implicit assumptions and heuristics (shortcut reasoning procedures). In history, an implicit assumption would be that during the era of the women's suffrage movement, all women wanted the vote, because of course women would want the vote. These implicit assumptions are very powerful and difficult to break down, in large part because they are so rooted in a learner's view of how the world is.

Heuristics are the root of many problems in education in whatever discipline, but the authors argue that if students can learn how these heuristics govern their analytical strategies, they can then begin to learn differently. And once that happens, they are more likely to examine their implicit assumptions about the world.

All of us are beneficiaries and victims of our own heuristics. For example, the quick thinking that results from years of driving experience helps us recognize, without even thinking about it, that the car in front of us is about to do something stupid, so we slow down and give the driver room to do whatever he is about to do. The shortcut reasoning procedures we develop as drivers lead us to reasonable conclusions at lightning speed.

But our shortcut reasoning can also lead us into errors of analysis. Maeyer and Talanquer identify four heuristics that get in the way of the kinds of learning we want to induce: the representativeness heuristic, the recognition heuristic, one-reason decision making, and the arbitrary trend heuristic.

The representativeness heuristic is one in which we judge things as being similar based on how much they resemble one another at first glance. We see this often in our history classrooms as, for instance, when a student leaps to the conclusion that two works of art separated by both temporal and cultural boundaries must be similar because they kind of look alike.

The recognition heuristic is what happens when we look at a number of pieces of historical evidence, but recognize only one of them, and so assign a higher value to the one we recognize for no reason other than that we recognize it. In the history classroom, this happens when a student is confronted with four or five texts, one of which is familiar, and so focuses all of her attention on that text, to the point of deciding that this text is the most important in the group, even if it is not.

One-reason decision making happens when students make their decisions about evidence based on a single differentiating characteristic of that evidence. So, for instance, in that group of four or five texts, our student might decide that because only one of them actually mentions something related to what she is studying, it is somehow more important than the other four when trying to figure out what happened back when the texts were written.

The arbitrary trend heuristic is one we see not only in our students, but in the works of our colleagues. Because several historical sources were generated within a few miles of one another, or within a few weeks of one another, we assume that they must, somehow, be connected to one another, without any evidence to support this hypothesis.

All of these heuristics occur at various moments throughout the semester in our classrooms, regardless of the discipline we teach. Not all students utilize these shortcut strategies all the time, but most of them deploy one or the other at some point in the semester. Knowing that this is the case, we can then design our courses to address these thinking strategies.

I wish someone had assigned me this article 20 years ago. Of course, it hadn't been written yet, so that wouldn't have been possible. But if it had, and I'd read it back before I started teaching history, my life would have been so much easier and my students' learning would have been so much richer.

I Know…Let’s Blame the Students

Sometimes it seems to me that whenever things go wrong in college teaching, the first impulse of the professor is to blame the students. They aren’t prepared for class. They don’t want to grapple with the hard concepts. They don’t want to read what I assign. They do all their work at the last minute.

And now, apparently, laptop computers in class have caused them to stop paying attention.

We’ve all seen it. The student with a laptop who has clearly checked out of lecture. Is he reading his email? Is she chatting with a friend? Is he playing World of Warcraft? And then there are the other students peering covertly or openly at the open screen.

I’m sorry to report that laptops aren’t the problem, nor are students. As Pogo said so many years ago, “We have met the enemy and he is us.”

I’m still not sure how it is that people with advanced degrees that require them to develop sophisticated research skills can so casually ignore mountains of research by serious cognitive scientists that demonstrates unequivocally that lecturing is one of the worst forms of teaching (if the quality of teaching is measured by learning).

A simple summary of that research — and I've read a lot of it lately — could be called the 20/20 rule. Study after study shows that when students are lectured at, their attention drifts very rapidly and that 20 minutes is about all their brains can tolerate. After 20 minutes, these studies show, the majority of students are somewhere else, with or without the aid of a laptop. And study after study shows that students (even the brightest and most attentive) retain, on average, about 20% of what is told to them in lecture. For a good summary of this research, see Lion F. Gardiner, "Why We Must Change: The Research Evidence," Thought & Action 14/1 (1998): 71-88.

So instead of blaming our students for wandering away on their laptops, I think it's time we looked a little more closely in the mirror and asked ourselves why they wander off. That, of course, would require us to admit that too often we (myself included, more than I'd care to admit) follow the path of least resistance and stand at the front of the room and talk while they take notes. Like any addiction, lecturing is a hard habit to break. If it were easy to stop, I'd have junked all of my lectures by now instead of something like two-thirds. But I'm getting there.

Some like to argue that what I’ve just pointed out is rooted in idealism that can’t be matched by the practicalities of teaching to large classes. Nice try, I say, because plenty of talented educators have figured out how to engage students in active learning even in large lecture halls. Perhaps the best example I know of is Dennis Jacobs, a professor of Chemistry at Notre Dame, whose work on active learning in large lecture classes has earned him many awards, not the least of which is the CASE/Carnegie U.S. Professor of the Year award. If Dennis can do it in introductory Chemistry, I guess I don’t understand why we can’t do it in the freshman History survey.

So let’s take a step back and stop blaming our students (and their laptops). Doing so will force us to think more carefully about our own teaching practice and how we (as opposed to they) might improve.