At the final full session of the pre-conference workshop at the AHA annual meeting, three presenters showed us work in progress.
Jan Reiff asked whether we have moved beyond Web 2.0 to Web 3.0 (transforming all of the web into a database, the Semantic Web, etc.). (Not in the history business, I’d say–but a few of us are pushing in that direction.) What was exciting in her presentation was the beginnings of the “hyper city” project for Los Angeles. She compared it to the Hypermedia Berlin project, which I have to say I find very limited and not especially useful. By contrast, the project Jan described seems much, much better.
My critique of the Berlin project is that while the graphics are nice, the content is not great (especially given how much money was poured into the project). Click on any of the map icons and you get an (unsourced) image, a brief and not very insightful text, and no method for interacting with the data. What makes the LA project much more appealing is that it is built not in a closed environment (the Berlin project seems to be a Java site), but on an open platform (Google Earth) with multiple databases linked to the main platform. In this way, users of the LA site will be able to play with and (possibly) add to the existing information in the project. By contrast, users of the Berlin site are stuck with whatever the site creators provide–and only that information.
Will Thomas talked about a project that focuses on the railroads and the building of modern America. This project, he argued, is also a metaphor for the history of the Internet–an intriguing parallel to be sure. Like Jan’s project, this one makes extensive use of GIS. What makes his project so appealing is that he, like Jan, does not conceive of it as an “archive” or database, but rather as a “research platform.” By that, he means exactly what he says–a platform for researchers, whether students or scholars, to pour data into and work with.
Stefan Tanaka showed early iterations of his “Annals” project, which provides access to both translations and facsimiles of Meiji-era documents. The project was quite interesting, even if the current interface is a clunky frames version (although, to be clear, what he showed was just a draft of the project). What does work in the project is the ability to see multiple narratives on the same event.
During the Q&A, I asked if the first two projects would be open to public contributions. In both cases, the answer was a “limited yes,” by which I mean that Jan and Will were both open to the idea of allowing others outside their projects to pour data into what they are working on, but neither had yet figured out the mechanism for managing that process. One suggestion I have is that the interested communities (local historical commissions, railroad enthusiasts, etc.) be made responsible for policing the “outside” datasets. By that, I mean that if they want to be able to pour data in, they also have a responsibility for reviewing and monitoring the quality of those data.
The discussion later turned to student work in various digital media and what constitutes “good enough” quality when it comes to video and audio–“broadcast quality,” as Will Thomas put it. It seems to me that there is a productive tension between wanting things to look as good as possible and wanting them to be “good enough.” Or, as Voltaire put it so long ago, “The perfect is the enemy of the good.” (Le mieux est l’ennemi du bien.)