Higher education has been all aflutter the past year or so about the transformative potential of online and/or distance education mediated through digital media. While the buzz on this topic has waxed and waned since the late 1990s (Web 0.1 for those old enough to remember), now there is some big money behind some of the more interesting attempts to harness that potential.
Nine million dollars in start-up grants from the Gates Foundation really puts some oomph behind several of these efforts, most notably the MITx initiative. There is much to be admired in these projects, but it’s less clear to me what this all means for the humanities in general and history in particular. Yes, the MOOCs of the world are drawing in tens of thousands of virtual students for courses such as how to build your own search engine, and the Khan Academy claims more than 160,000,000 lessons delivered thus far. But the vast majority of the content out there from these types of platforms is in the STEM disciplines.
If you are a professor at almost any college or university in the United States, you know that there are plenty of people on your campus, just as there are on mine, who believe that online/distance education is the future business model for higher education. It’s certainly an attractive one at a place like George Mason, because we are completely out of classroom space, with no relief in sight in the next decade, so if we could convince our students to just stay the heck away from campus, our space problems would be solved.
At this moment, in June 2012, I have no opinion one way or the other about whether online/distance education is really the future of our industry, or whether, like cold fusion, it is and always will be the future solution to all our problems.
What I do know at this moment is that no one I’ve been able to find is engaged in serious assessment of the learning that is happening through these courses, especially as compared to other delivery models, whether traditional classroom models or hybrid online/classroom delivery. Given that universities are already pumping untold millions of dollars into the rush to develop these sorts of courses and degrees, and new start-ups are popping up almost weekly, it seems to me that we ought to try to figure out just what, if anything, is changing in our students’ learning.
After all, learning is the goal of teaching the last time I checked.
The good news is that, at least in the history business, we know something — a lot actually — about how to assess what and how our students are learning about the past. Those assessment models are not dependent on a particular delivery system and so they can quite easily be applied to the new courses/degrees that are sure to result from the Online Course Tsunami coming ashore on the historians’ coast.
My hope is that one of these big money foundations out there (Bill, Melinda, are you listening?) will set aside at least a little bit of their millions for some serious, scientific assessment of learning gains through these new course delivery systems. Then we’ll have a much better sense for how much time, effort, and emotional investment we ought to make in these models.
No time for assessment. Time to start firing people, starting with the university presidents. Strategic dynamism!
Mills, I saw the link to this post on Twitter via @dancohen. I responded to him in that forum but wanted to follow up here as well.
I teach part-time at the Univ. of IL Graduate School of Library and Information Science (GSLIS), where an online curriculum began over ten years ago and has enjoyed significant success. I disagree with your point about no one doing serious assessment of learning outcomes from online curricula, based on what I know as an active participant in one for the past ten years.
We are an accredited institution and as such, are required to go through a very thorough assessment and review every several years. This is done for all course offerings, traditional as well as online, and there is no difference applied in terms of standards.
The online curriculum (called LEEP) was first assessed as part of the overall school in 2004 and again last fall (2011). In 2004, since it was the first time since LEEP had been created that it was to be evaluated in this in-depth manner, the online curriculum was especially emphasized in assessment documentation. In both reviews, GSLIS passed with flying colors. In fact, it has been lauded particularly for its successful approach with online courses and remains at the top of graduate library schools in North America.
To quote Dr. Linda Smith, the associate dean and really the driving force behind LEEP: “In accredited programs, we have to demonstrate that our program meets the ALA Standards for Accreditation, regardless of mode of delivery. Every time we are reaccredited, that is confirmation of our success in achieving this.” Every time I teach I am formally evaluated and provided with feedback on the course by students, and these inputs are taken very seriously by the school.
After having written all of this in response, I want to be careful to ensure that I am not misunderstanding your overall point. If I am, apologies and please explain further. My experience in this particular situation leads me to believe online education can be done very well from a learning assessment perspective. Dr. Smith told me separately that “…one can do online education poorly or well–it is not inherently good or bad.” I agree.
I don’t believe a “serious assessment of the learning that is happening” in traditional courses has been done. In fact, what has been done indicates that it’s not worth all that much. Since the 1960s there has been about one major sociological/anthropological study of the university per decade, starting with “Boys in White” and most recently “My Freshman Year,” and they consistently show that “teaching” is more of a side activity at universities and “learning” is at best a happy byproduct. So if “learning was the goal of teaching” the last time you checked, I don’t think you checked very carefully.
That doesn’t mean that students don’t come across transformative and profound courses like yours, but the evidence shows that that is the exception, not the rule.
You need to compare the new models with an honest assessment of the current models – not the best examples.
I recently taught the same course both classroom-based and fully online, and the learning happening online was much more in-depth, as the results showed – students couldn’t hide behind attending classes. All their engagement was learning.
But that was because the online course was designed to be high engagement to compensate for the lack of classroom interaction. There were no efficiencies of economies of scale. The only purpose was access, not savings.
A system aimed at savings will produce predictable results. The mode of delivery most likely has relatively little to do with it.
Thanks for bringing up this topic, Mills. I am very troubled by the idea of brick and mortar universities using online courses to solve the problems of rising enrollment, dwindling classroom space, and need for revenue. Having taught a course online at an Ivy League university, I don’t think such courses are good solutions for any of these problems–that is, if universities are still prioritizing learning. A virtual classroom will never be a substitute for the personal relationships and intellectual interaction that are so vital to the college learning experience.
That said, I think there are cases when online courses are a good idea. I had students doing summer internships from California to India taking my course, and having it available online gave them access to a class with other students at their own university while not being physically present. This was a history survey course and all of the students were taking it to fulfill general requirements. When designing the syllabus, I was told to have the same workload and expectations of any other course at the university, which I did.
I think the term “online course” may be too broad for the range of things actually going on, though. The course I taught came about as close to a physical class as you could get. There were lectures videotaped from an in-person iteration of the course, and there were two class meetings a week in a virtual meeting room with microphones and the ability to share documents and break into separate group meeting rooms. I led discussions of the readings one day a week and primary source analysis activities the other. I see little similarity between this and the chance to watch PowerPoint or video lectures at your leisure. Not to say the latter has no merit, but we need to distinguish between open learning opportunities online and online classes with assignments, feedback, discussions, and grades. Both will expand the availability of knowledge, but neither will ever replace learning with others in person.
Thanks to everyone who has chimed in so far on this topic.
First, to Steve: I’d love to see what sorts of assessments you do for these courses/curricula you mention. Because I’ve been working in the area of assessment of learning in history courses since the late 1990s, I am always a bit skeptical of how these assessments are done, but am also very pleased to see good assessment when it happens. For instance, are you doing pre-testing of the students to find out what they already knew walking into the course? I always ask this question because I think too often we claim credit for learning in our classes that has already happened elsewhere.
Then to Dominik: I suppose I’m a bit less pessimistic than you are, but not entirely. Far too often, the goal of teaching is to complete the course, not to expand learning. It’s not that faculty don’t want learning to expand in their students, but rather, the design and delivery of their courses is such that learning is unlikely to expand. I do think you make a very good point about comparing apples to apples.
Finally, to Cassandra: I’m quite sympathetic to the point you make here, because one of the reasons I have avoided the online course environment is that I prefer to be in the same room with my students. Why? Because I like the personal interaction with them in the analog world and would miss them if they weren’t around. But more to your point: you are right that online courses too often simply mirror analog courses. In general, I think there are serious problems with the way we teach history in the analog world, and so setting up online courses to be like those analog courses strikes me as problematic. Instead, I think we need to think through very carefully how a course might be different in the online space — and how that difference might promote more, better, or different ways to learn about the past.
Thanks everyone for these thought-provoking comments.
[I’m a different person from Cassandra Good, above]
I’m concerned about assessment, too, but also about another issue you raise: the quality of assessment. My reaction to your post is “be careful what you wish/ask for” — or, perhaps, be very precise about what you want, since “assessment” can be a double-edged sword, eroding as well as promoting quality.
As Steve points out above, assessment is very much part of the accreditation process, and, in the wake of the online-“university” student-loan scandals of the past few years, accrediting agencies are taking a very close look at online classes everywhere. If it hasn’t already happened at your university, that’s probably just because it hasn’t undergone re-accreditation recently. When it does, close scrutiny of online classes, and their comparability to traditional ones, will undoubtedly be part of the process.
The problem, then, is what we want to see assessed, and how it can be assessed. As you point out, content knowledge is relatively easy to assess, via things like pre- and post-tests (either within a course, or over the course of a student’s college career, though the latter is, of course, complicated by transfers). However, many of us who teach college (and K-12) classes are aiming at a lot more than content knowledge (and you certainly seem to belong in that company, given your other pedagogical posts, which I’ve read with pleasure and interest).
Content/concept mastery is relatively easy to assess via the sort of multiple-choice pre- and post-tests that publishers are delighted to build into online course “packages” (and LMS producers provide “tools” to create, for those of us who insist on being old-fashioned and creating “course content” ourselves). Because such assessment is relatively cheap and easy, if we call for more “assessment,” without specifying what we mean, that kind of basic content/concept-focused assessment is all that we’re going to get. Even more important, because the tools available for online courses, and the approaches to structuring online courses which universities allow and/or encourage, are increasingly being designed precisely to allow for “assessment,” usually of the relatively simple kind, there’s a real danger that calls for more assessment will, in fact, lead to the dumbing-down of online courses, with more emphasis on content/concept mastery, and less on skill development.
If we want high-quality assessment, as well as high-quality learning (more skill- than content-focused), we need to be very specific in saying that that’s what we’re looking for, and want to measure. Both are possible in online courses, but, like high-quality assessment, and learning, in a traditional environment, they’re labor-intensive, and hence expensive. Online learning can solve some problems (e.g. the classroom space crunch you mention, or the commute-to-classroom-time ratio that may discourage some students), but it doesn’t solve the basic problem that, if real learning is to occur, both teachers and students need to have substantial time to devote to the endeavor. It’s entirely possible to conduct — and assess — that sort of demanding course online, but that’s not the direction in which most online learning programs are headed, in large part because those who promote them are hoping that online learning will not only free up classroom space, but also save money.
I suspect you’re actually thinking about measuring higher-order skills, as well as content/concept knowledge, when you write “assessment,” but I’m a bit concerned when you boil assessment down, in your comments above, to what students “know” about history. If you describe what you want that way, without elaboration, you’re going to find plenty of people ready to sell you (and/or those who assess your courses, who, for those of us who teach online, often turn out to be people outside our departments) a pre-packaged “solution” for doing just that. If you also want to understand *how* students know (and how they decide what they know, and why, and with what degree of certainty, in what context, etc., etc.), and how that changes over time, and you want your department to have courses, online and off, that promote such skills, then you’re going to have to be very insistent that the assessment instruments being used to evaluate those courses (which have a tendency to become the tail that wags the dog) are capable of answering such questions (to the degree that they can be answered — which is, of course, like the historical questions themselves, probably not 100%).
Hi Cassandra (#2). Thanks so much for this detailed comment. I was out of town when it came in and have only just returned to the grid and read it.
I agree completely with your main point — that assessment is too often done at the most basic level and that these rudimentary forms of assessment (measuring content acquisition) are then used as proxies for success or failure. A good primer on the dangers of such an approach for historians is Sam Wineburg’s “Crazy for History” (Journal of American History (2004) 90/4: 1401-14).
As you suspect, I am talking about higher order thinking skills even more than content knowledge. In the humanities, we can debate forever which facts students really ought to know, so I find the “which content” debate to be a very sterile and boring one. In other disciplines, it is more the case that students must know certain things. For example, I want my nurse to know that I have two kidneys and one liver and not the reverse. But do students of modern history REALLY need to know that the Battle of Waterloo happened in 1815 as opposed to 1814? Of course they should know this, but no one will die if they don’t…and they can look it up at a more leisurely pace than, say, a nurse in the ER trying to remember “one kidney, two kidneys, which is it???”.
Thus, I think all learning assessments (as I have written in a number of places previously, including in this blog) need to be very subtle and sophisticated if they are going to provide us the information (as opposed to data) that we want and need to make good decisions about how we teach our students. I don’t think we do a wonderful job of this in the analog world, but what we do in various colleges and universities is getting better in fits and starts. At a minimum, I’d like to see the best practices in the assessment of historical learning being applied to online only and hybrid courses to see whether the results are similar or different.
You also mention accrediting agencies. Having just spent more than 18 months preparing for and then living through our decennial reaccreditation here at George Mason, I can say categorically that the process and the results were among the most costly and absurd exercises I have ever been through in almost 30 years working in higher education. I have zero respect for the process or for the results of that process–at least in our specific case–and would gladly see the entire system of regional accrediting associations junked tomorrow. If budget cutters in state legislatures had any idea, any, how much of the taxpayers’ money is being wasted on a meaningless and virtually results-free mess, they would pass a law tomorrow requiring their state systems to abandon the reaccreditation process as it currently exists.
What would the result be for the various state universities in Virginia if all of us abandoned the Southern Association of Colleges and Schools on July 17, 2012? The Association would revoke our accreditation, that’s what would happen. And what would happen then if all the state university presidents responded with an email that said, simply, “Noted.”? Would George Mason cease to be an excellent institution of higher education? How about the University of Virginia? Or William and Mary, or Virginia Tech? Would students stop attending? Somehow, I doubt it.
Can you tell that I would like to rant on this subject for hours? Instead, I’ll stop there and say thanks so much for the thought-provoking comment.
1. A university would lose those students receiving, or expecting, federal aid if it abandoned accreditation. Federal student aid requires institutional accreditation for approval.
2. If the current formal assessment (accreditation) applied to B&M learning institutions is a waste of time and money, a request for “serious, scientific assessment of learning gains through these new course delivery systems” is unreasonable, since it asks for a higher standard than is applied to existing institutions today. More logical would be a demand to apply the easy and cheap “content/concept mastery” assessment everywhere – as part of an improved accreditation program – and then build from there.