Regular readers of this blog will know that I am actually quite supportive of the whole idea of assessment in higher education. I am convinced that we need authentic forms of longitudinal assessment of learning in all of our programs, especially undergraduate programs, that provide us some sort of reasonable picture of whether our students are learning what we want them to learn and whether they are getting better or worse at it. In this way we can have some sense for whether we are doing the right thing for our students.
Without such assessments we are forced to fall back on (a) the nods and smiles of our students that are supposed to tell us they “got it” in class today, (b) their performance on the tests and essays we assign, which may or may not be tied to departmental learning objectives, or (c) end-of-semester student evaluations, which, of course, are no measure of learning.
However, the experience we are having right now in my department is a perfect example of why faculty members want to run screaming from anyone who utters the dreaded word “assessment.” You may find this difficult to believe (or maybe you won’t), but we are currently having to undergo five separate assessments of learning in our undergraduate program. How can it be that one department could have to engage in five separate assessments simultaneously? I’ll try to explain…
- We have our own assessment (one I helped design) that goes like this: All History majors must take History 300 (Historical Methods) and History 499 (Senior Research Seminar). Each semester we select a random sample of final papers from History 300 and put them in a file; then, when those students complete History 499, we pull their final research papers. We convene as a group and score each pair of papers on a rubric of historical thinking skills to see (a) whether our students are learning what we hope they will learn and (b) whether, as a group, they are making progress over time. This is an ongoing assessment of learning in our major and one we subject ourselves to.
- Several years ago our Provost created an Academic Program Review process that, for us, began in 2008 and will continue every other year, apparently forever. This particular process uses a software platform called the “Weave”. Please don’t ask me to tell you how it works. My colleagues and I figured it out in 2008, but it has already been updated several times and apparently now works entirely differently. I won’t tell you the adjectives a colleague in Cultural Studies used to describe the Weave (this is, after all, a family blog).
- We are now in the first full phase of our decennial reaccreditation by the Southern Association of Colleges and Schools (SACS). For now the SACS process is all about making sure we collect credentials and syllabi from our faculty and about creating a Quality Enhancement Plan (QEP). I am part of the QEP steering committee for the University, and the end result will be quite good. Getting to that result is going to be painful at times, but the benefits for our students will make it worthwhile.
- We are undergoing an assessment of the general education curriculum. I haven’t been able to determine the mandate for this particular assessment, but for now I’ll refer you to my previous post for some insights into my thinking about general education. The short version is that I think distribution requirements are a good thing. Mandating particular courses (and thereby stifling student choice) is a bad thing.
- The State Council for Higher Education for Virginia has mandated an assessment of how all colleges and universities in the state are helping our students become better writers. For this particular assessment, a group of seven or eight faculty members was asked to come together to use the rubric we use in our assessment of historical thinking skills (see #1 above) to assess student writing. I had to leave that meeting well before it was over, which is probably why I’m still unclear how a rubric designed to measure historical thinking can be used to measure writing. Moreover, I’m unclear how having each academic department in the University measure writing with its own rubric will yield data that can be aggregated in any meaningful way. But maybe that’s just me…
Okay, got all that? What we have here is one assessment generated by the department, two assessments coming from the Provost’s office, and two from outside agencies with some level of supervisory authority over us.
As much as I seem to be complaining (because I am) that we have so many assessments going on at once, I want to reiterate that I am sympathetic to the need for each one of them. I can’t argue with the need to know the things these different assessments are after.
But (and I think this is a very important but) the last time I checked, faculty members were first and foremost supposed to teach their students and, second, supposed to produce high-quality research. Of course, we also engage in lots of departmental, college, and university service (not to mention community service). Even with those obligations, we must make time for at least some assessment, but five assessments? All at once?
I think it would be great if this country could tie assessment in higher education to the reforms that will be taking place next year in K-12 education. There is a lot to admire, for instance, in the goals outlined for this new federal grant program:
http://www.ed.gov/programs/racetothetop/index.html
Wow, that’s a lot of assessment. I’m impressed that your department has developed its own process for assessing its majors (#1 above). From my experience with the SACS reaccreditation process (#3 above), I would think that your existing departmental assessment of historical thinking skills could be used for the SACS process. They’ll just want to know what you plan to do if you find out that your students aren’t learning the skills you’d like them to learn.
And since you already have a process in place to assess your majors’ historical thinking skills, adding a second rubric to assess their writing skills (#5 above) would be relatively easy to do, I think. I see your point about the difficulty of assessing writing skills with your existing rubric, but a second rubric could do the job well. You’ve already got a process for selecting and evaluating student work as well as, apparently, some faculty buy-in to that process. That’s the hard part of program-level assessment in my experience.
So with a little extra work, items 1, 3, and 5 above could be combined. I guess that’s a little better! Five initiatives do seem to be a little crazy-making.
I don’t know about anyone else, but I didn’t get a Ph.D. to become an assessor. I wanted to be a professor. At my institution, assessment stands directly in the way of being a professor by overloading us with assessment work to the point that we can teach, assess, and do service; or teach, do research, and assess; or assess, do research, and do service. We have DAYS of assessment work to do for EACH CLASS and EACH SECTION EVERY SEMESTER. It doesn’t help us be better faculty members, it doesn’t help our students gain any competencies whatsoever, and our copy budget is swallowed up in all this assessment bullcrap. Whoever came up with this dumb idea should just go away and stop interfering with our profession. My experience is that all assessment attempts are a truly dumb idea. Outcomes-based education was the crap that was sprung on secondary education, and ever since the ’70s it has only served to degrade the quality of graduates our secondary schools produce. Now it’s happening in universities. Pull the plug on it already and let us be professors.