Several years ago I took a group of Mason students to Prague, Vienna, and Budapest. Among the things I’d planned for them was a visit to the Klementinum in Prague where the Codex Gigas (the “Devil’s Bible”) was on display. Needless to say, when I told them we were going to a library to look at a book, they were decidedly underwhelmed. Until they saw it up close and personal.
At roughly 90cm x 50cm and weighing in at about 75 kilograms (some 165 pounds), it’s quite a book and was unlike anything they had seen or expected. More intriguing to them, though, was the legend surrounding the work. Created sometime between 1200 and 1230 in a monastery in Bohemia, the bible comes with a story: the devil himself helped a monk create it in just one night. In exchange, the monk included an image of the devil as part of the text decoration. Despite their earlier reluctance to go look at a book, the students pronounced the whole thing kind of cool.
I was reminded of that trip the other night during a tutorial I’m leading with four of our most talented doctoral students. One of those four, Jeri Wieringa, asked one of those questions that students ask with some regularity and that make us think really hard. I’ll paraphrase what she asked: “If we digitize texts and present them to students as just so many pixels, are they losing an essential connection to the text as a historical artifact?”
This question led to an energetic discussion around our table. On the one hand, there are obvious advantages to digitizing texts. At the most basic level, the texts, especially those before the age of the typewriter, become much more legible and therefore accessible to a wide audience. Anyone who has taught pre-typewriter texts knows just how reluctant students can be when it comes to trying to make sense of handwriting from back in the day. Even excellent tutorials like the one on decoding Martha Ballard’s diary can reinforce the notion that such handwriting is essentially unreadable except by experts or code breakers.
A second obvious advantage is that the text becomes fully searchable in ways that it can’t be when it is just an image of a document. Our Papers of the War Department project here at RRCHNM is a great example of the advantages of having transcribed texts to sort through and analyze using the text analysis algorithm of your choice.
Finally, making the text available in this way opens up any digitized collection to crawling by the various search engines, thereby opening up the collection to a much larger audience.
But, and this was the but that we got stuck on in our discussion, the artifact itself can disappear from the researcher’s view if an image of the original is not also available. We really liked the War Department project because that image is there for users to see any time they want. [NB: I edited this paragraph because in the original, my wording made it sound as though images weren’t available on the War Department site.]
To put it another way, the coolness of the text as artifact disappears when all the researcher/student sees is black pixels on a white screen. Yes, it’s much more readable and accessible. But there is a bigger potential problem, and this is the one that really troubled Jeri. An essential task of the historian is to assign greater or lesser value to a particular historical source based on his/her growing expertise in a given subject. Some documents are just more important to a given problem or interpretation than others, and it’s up to us to help others see that.
But if all documents are reduced to black pixels on a white screen, they start to seem all the same. Students and novice historians often have a difficult time placing sources in the hierarchy of importance they are developing. If all texts look the same, are we making it more difficult for them to develop this skill of prioritizing some sources over others?
We arrived at no answer in our conversation, and despite two weeks of ruminating on the issue, I still don’t have one. I’m just going to have to worry about this one for a while longer.