Responding to the Technophobes and the Technophiles

For our weekly meetings with Dr. Robertson, the DH fellows each find an interesting new post about digital humanities to share with the group. Over the course of the year, we’re supposed to track some sort of theme or trend. Last year, I focused on posts and projects that work to Indigenize and/or decolonize digital humanities. This year, I’ve followed the ways that the field has responded to criticism from other (non-digital) humanists. At the same time, I’ve been interested in how digital humanists have worked to challenge the rhetoric of tech triumphalists. It strikes me as a problem that even though digital humanists spend so much of their time denouncing technophilia—uncritical enthusiasm for technology—they continue to be accused (en masse) of being technophiles themselves. I am fully on board with critical DH work, but that work needs to be well-informed to be constructive. It’s unfortunate that the field is still being criticized by apparent techno-cynics who have not bothered to do a basic review of the literature.

With corporations trampling over privacy concerns, ed-tech companies touting ways to replace teachers with machines, and university administrators treating makerspaces as places to push STEM-oriented entrepreneurship over more humanistic endeavors, we need humanists who can engage with data and speak the language of tech more than ever. I have juxtaposed articles that show how exhausting it is to defend the basic premise of digital humanities over and over with articles that show digital humanities is as much about humanizing the digital as it is about doing the humanities digitally. Maybe next year we can stop wasting time addressing the same worn-out, flimsy criticisms and focus our attention where it matters, whether that’s finding ways to decolonize digital archives or pointing out the pernicious realities of the tech world’s utopian promises.

Why Are Non-Data Driven Representations of Data-Driven Research in the Humanities So Bad? by Andrew Piper. In this post, Piper explains that humanists are understandably concerned with “the hegemony of data and data science today,” especially because of the way that the press often treats data-driven arguments as “social certainty.” However, Piper expresses frustration with how people who don’t use data misrepresent data-driven arguments. Because non-data driven arguments are “not subject to the same rules of evidence,” Piper explains, “If you don’t like data, it turns out you can say whatever you want about people who do use data.” It is important that we continue to critique work in the digital humanities, but it’s also important that these critiques are well-informed and based on evidence, not bias.

Twitter’s Response to “The Digital Humanities Bust” by the editors of Digital Humanities Now. A piece in the Chronicle of Higher Education, titled “The Digital-Humanities Bust,” sparked a conversation on Twitter with its claim that the field is all hype and no substance. Because there weren’t yet any blog-length posts, the editors of DHNow (myself included) chose to highlight the tweets. I felt like the Twitter format really served its purpose here. Choosing to respond through tweets that mainly linked out to existing articles, blog posts, and digital projects sent the message that there’s really no need to try to defend digital humanities yet again from the usual tired criticisms; the case for digital humanities has already been made, and those who want to challenge it should at least read up on the basics first.

What is (the value of) Digital Humanities (again, again, again, again…sigh) by Katherine D. Harris. This is a longer post responding to “The Digital Humanities Bust,” with particular attention to the comments section. With exasperation, Harris asserts, “The issue here is not one of critique, but instead a demand that Digital Humanists stand up, be counted, and volunteer to be shot. In no way is the Chronicle article an attempt to do anything other than declare the death of Digital Humanities, or perhaps it’s the author’s wish fulfillment.”

The Disappearance of Books Threatens to Erode Fine Arts Libraries by Sarah Emily Bond. Bond discusses the decision by a dean at the University of Texas at Austin to relocate books and other materials from the Fine Arts Library to off-site storage to create room for a makerspace called The Foundry. According to Bond, The Foundry is the first step in the plan for a new School of Design and Creative Technologies. The dean discussed this plan at the 2018 SXSW Interactive Conference, “suggesting that ‘fine’ arts is an anachronism and should yield primacy to more entrepreneurial, STEM-oriented creative arts such as video game design.” By disconnecting the fine arts from their history, the removal of books represents an attack not just on print resources but on the discipline itself. As she explains, “removing books, special collections, and other primary materials from a library space often removes essential sources of inspiration from students’ immediate view.” Bond emphasizes that makerspaces are not panaceas but argues that we need libraries where the digital and the analog coexist, a point she reiterates in a follow-up post on her blog.

Lies, Damned Lies, and Statistics: Some Thoughts from ASU+GSV by Audrey Watters. In this post, Watters reports on some of the “inaccuracies and misinformation” she heard at ASU+GSV, an ed-tech summit. Highlights include that she “heard someone claim that kids learn everything from YouTube these days so they don’t need what’s taught in school” and “heard three different people repeat that old Arthur C. Clarke adage that ‘any teacher who can be replaced by a machine should be.’” Watters says she wanted to scream at the entrepreneurs and investors spreading these lies, but she also “wanted to scream at all those reporters and all those pundits who uncritically repeat these stories too and at all those educators who readily take it all in.”

One Best of All Possible Learning Conferences by Nathan Loewen. This write-up on the 2017 HASTAC conference couldn’t stand in starker contrast to Watters’ experience at ASU+GSV. The theme of the conference was “The Possible Worlds of Digital Humanities,” but, demonstrating that digital humanists are far from technophiles, Loewen finds that “the events at HASTAC repeatedly demonstrated how applications of humanist skepticism to technology may promote excellence in digital teaching, learning and scholarship.” Loewen describes a focus on “collaboration and openness,” not on uncritical use of digital tools. “Instead, the very structures of digital tools should be interrogated for their implicit biases.”

Can We Do Better Than a 10 Year Gap in Knowledge (re: digital privacy, ethics, etc)? by Jacqueline Wernimont. Mainly in response to the recent Facebook-Cambridge Analytica data scandal, Wernimont expresses her frustration with “the enormous time gap between scholarly understanding of the privacy issues in digital media technology here in the U.S. and public and political awareness.” She explains that she is “flummoxed” that smart people are just now understanding the profiling practices of Facebook (but still don’t seem to realize it’s more than just Facebook) despite the work of academics like herself to draw attention to data monetization, privacy issues, and “the incredible risks that some people face just in engaging online.” Wernimont has more questions than answers in this post. She asks, “Anyone have ideas about how we can make sure that we don’t have another 10 year gap in knowledge?”

Teaching Underrepresented Students How to Navigate Higher Ed Via Digital Humanities by Elizabeth Losh, featuring an email interview with Marisa Parham. With so much (understandable) vexation in this list, I end with a post that speaks to the possibility for digital humanities to redirect our society’s currently misguided STEM obsession and make a better, more just digital (and non-digital) world. Parham explains how K-12 educators can use digital humanities to serve the needs of underrepresented students, including by helping them learn to be both digital creators and better digital consumers. Instead of calling for more STEM instruction, Parham makes the case for better STEM that is more fully connected to the humanities and social sciences. As Parham’s interview makes clear, digital literacy is crucial (especially for underrepresented students), and computational/quantitative/digital humanities should be a part of how students develop digital literacy.

Spring 2018 in the Research Division

My second semester in the Research Division—the final semester of my fellowship—has come to a close. Most of my time was spent working on Digital Humanities Now and PressForward. Both have seen a few changes this spring, but the most exciting news is that we launched the redesign for the PressForward website!

The new website had been in the works for over a year. During our rotation into the Research Division in our first year of the fellowship, Jessica and I brushed up on our HTML to create some mockups for the site, so it’s been a great learning experience to see the process from start to finish. Many of the current design elements are things that Jessica and I came up with, and it’s rewarding to look at the final product and see how our contributions helped make it happen. Last semester, I continued working on the website by creating and editing some partner profiles. This semester, the rest of the PressForward team and I spent part of a day gathered together at one table to manually move the website over (it was the best way to do it, given our constraints). Now that the website is up, you can also see the presentation slides I created last summer.

Throughout the semester, I also served as Site Manager of Digital Humanities Now, formatting the posts selected by the Editor-in-Chief, managing email, and running DHNow’s Twitter account. I also took a few turns of my own as Editor-in-Chief, choosing posts for publication, like this one and this one. Because DHNow follows so many RSS feeds, we run into a lot of issues with feeds that break for one reason or another. One of the things I worked on this semester was trying to determine why feeds that seemed to be active and functioning properly were still breaking in DHNow’s PressForward plugin. I won’t bore you with specifics, but I was able to provide our developer with information that allowed him to identify an issue.
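For anyone curious about what that kind of detective work looks like, here is a rough sketch in Python using the feedparser library. It is illustrative only, not the actual diagnostic we used, and the feed URL is a placeholder; the point is that a feed that looks fine in a browser can still break an aggregator through malformed XML, redirects, or odd HTTP responses.

```python
# Illustrative sketch only -- not the actual diagnostic we used.
# Requires the third-party feedparser library (pip install feedparser).
import feedparser

def check_feed(url):
    """Surface the quiet ways a seemingly healthy feed can break an aggregator."""
    parsed = feedparser.parse(url)
    return {
        "http_status": parsed.get("status"),        # e.g. 200, 301, 404
        "malformed_xml": bool(parsed.get("bozo", 0)),  # parser hit bad markup
        "parse_error": str(parsed.get("bozo_exception", "")),
        "entry_count": len(parsed.entries),         # zero entries = nothing to nominate
    }

# Hypothetical feed URL:
print(check_feed("https://example.org/feed/"))
```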

Finally, I continued to do testing for PressForward. The main improvement we had to test was a new email notification feature that makes it possible to send an email to a designated list of recipients every time a post is nominated. You can learn more about the latest release here. When I first started testing last semester, it was unfamiliar and intimidating. Now I’m much more comfortable with the process and am always glad to hear that we have new things to test. I also got to do some testing for the user management plugin that Amanda Regan originally created for DHNow and wanted to make available to another publication team.

Overall, it’s been another fun and informative semester at the Center. I still can’t believe that my time as a Digital History Fellow is over. The whole experience has been such a complete immersion in the world of digital humanities. It was easy to groan at having to find a post to talk about at our weekly DH Fellow meetings, but I really think I’ll miss those meetings. They provided a reason to stay on top of the latest DH news and an opportunity to think and talk through critical issues. I hope that I’ll continue following the latest developments and ideas as closely as I did as a fellow. Although I’m saying goodbye to the fellowship, it’s not quite the end of my time at the Center. I’m looking forward to spending the summer doing more work on PressForward and serving as full-time site manager and Editor-in-Chief of DHNow.

The Digital: Inside and Outside the Academy

Over the course of the academic year, I used the process of selecting, nominating, and discussing pieces for DH Now to explore the ways in which digital tools and methods are used inside and outside the discipline of history. At the beginning, my exploration was explicit: I looked for articles that discussed the tools and methods at the heart of the creation and development of new disciplines or fields of study, such as digital sociology, critical university studies, and Black code studies. What interested me most about these articles was their emphasis on the ways in which these disciplines and fields are unique, provide greater context for their larger fields, and are inherently interdisciplinary in their approach.

New fields aren’t forming every week, however. My interest in the ways digital tools and methods functioned inside and outside the discipline evolved throughout the year to examine projects that challenge prevailing notions of how these tools and methods are or should be used. From crowdsourcing projects to digital storytelling projects to digital maps, digital methods and tools are being used not just to present new arguments, but to address erasures and silences in both the historical record and everyday American life.

In Why is Digital Sociology?, Tressie McMillan Cottom defines the space the field of study occupies—a space unoccupied by traditional Sociology, a space different from that of Communication Studies or Information Studies, but a space that should be engaging with other interdisciplinary fields such as Women’s and Gender studies, Black Studies, and Latinx Studies. McMillan Cottom asserts that Digital Sociology exists because digital societies exist, and they must be examined and explained: “Our job is to understand the means by which, the conditions under which, the context of internet technologies. We study process.”

Introduction: Wild Seed in the Machine opens the special issue of The Black Scholar titled Black Code. Jessica Marie Johnson and Mark Anthony Neal articulate how Black Code Studies directly challenges those “conceptions of the digital that remove Black diasporic people from engagement with technology, modernity, or the future.” Black Code Studies centers both blackness and the digital, inside and outside academia.

Digital Humanities as Critical University Studies provides an overview of a panel presented at #MLA2018 by Roopika Risam, Beth Seltzer, and Matt Applegate. Each scholar focuses on a different topic: the ways that DH practitioners view campus communities; an analysis of alternative career skills in academic job postings; and an alternate genealogy of DH practice that clarifies its focus on diversity and inclusion. Yet the intersections are clear—the panel examines the way that DH work has reconfigured the use of academic labor in response to campus austerity measures, and it attempts to align DH’s commitment to diversity and inclusion with the work of critical university studies.

Matt Vetter, Theresa McDevitt, Dan Weinstein, and Ken Sherwood consider the Wikipedia edit-a-thon as a digital intervention in Critical Praxis in Wikipedia: The Art + Feminism Edit-A-Thon. They discuss the gender gap in Wikipedia’s editor corps (87% identify as male) and the resulting systemic bias in the free online encyclopedia. An edit-a-thon, held in conjunction with the Art + Feminism campaign, was a direct, interdisciplinary response to the gender gap, supported by faculty and staff from across the university.

Lorena Gauthereau’s talk and Zotero workshop, Decolonizing the Digital Humanities, discusses the archive as part of the colonial project: providing a framework for national knowledge and history, but also creating silences, specifically the erasure of Indigenous histories and languages, as well as of the presence of people of Mexican descent on the Texas side of the Texas Revolution. As part of the Recovering the U.S. Hispanic Literary Heritage project, Gauthereau’s talk and workshop focused on decolonization and the questions DH practitioners must ask themselves to ensure that their projects are not reinforcing colonial structures of thinking.

“Seeing White” is a series of podcast episodes out of Duke University’s Center for Documentary Studies. It uses history, politics, art, and personal experience to create a critical and comprehensive narrative of white America. While Eileen Cormier’s article How “Seeing White” is using audio journalism to critically examine whiteness in America explores the ins and outs of creating the series and focuses more on the direct connections to journalism, what is most useful for this discussion is the interdisciplinary nature of digital storytelling.

A Digital Map Leads to Reparations for Black & Indigenous Farmers traces the experiences of African and Native American farmers receiving “people to people reparations.” This food and agricultural justice project, which evolved out of controversy at the Young Farmers Conference over the acknowledgment of white privilege in the food system, is rooted in both history and cultural anthropology.

The Components to Create Local Linkages: Working in the Public Projects Division

My work in the Public Projects Division was rooted in Creating Local Linkages, a project designed to introduce public librarians to historical research methods and digital history skills. The project team will develop in-person workshops, online courses, and open educational resources that can be modified for locally effective use: a range of professional development opportunities to support public librarians’ work to develop and facilitate local digital history programs in their communities. As a member of the project team, my work focused on developing content for an online module, participating in site visits to local public libraries, and collaborating with the current project manager in preparation for taking on this role myself in the fall.

In our first team meeting of the semester, I volunteered to develop the content for the Analyzing Primary Sources module. I believed this would be an easy enough task, given my past experience teaching students how to use primary sources in the classroom; yet my experience only took me so far. Over the course of the semester I needed to consider (and reconsider) both content and audience. For example, in the classroom, I might teach middle- or high-school students how to analyze a letter, journal entry, or political cartoon as a primary source. The holdings of a local public library, however, are much broader, and I needed to include additional sources in the module, such as historical films, numerical data (such as tax assessments), and oral histories. This forced me to slow down, research the nature of these unique sources, and determine how best to articulate the steps of analysis. That determination rested as much on the steps of analysis themselves as on the audience to whom they would be articulated: both the public librarians and the patrons who may use the modules and associated open educational resources.

Our team also needed to understand the needs of our audience, so we researched public libraries in Maryland and Virginia with local history holdings and conducted site visits with library staff. While we had a set of questions we were looking to answer, library staff did as well, making our site visits moments of shared inquiry where we could begin to determine new questions and issues, with the goal of also identifying mutually satisfying answers and solutions. I also spent time throughout the semester talking through project logistics with the current project manager; since I have been interested in project management in a public history context, I was excited to have the opportunity to shadow her for the semester, and examine what processes and tools are in place to manage our work. We’re also collaborating on the work plan for the design of the online course.

Creating content, researching for site visits, and learning project management have all been important and useful experiences for me this semester. However, as a trained public historian, I would be remiss if I didn’t discuss, even briefly, the way in which collaboration influenced my work. As I developed the primary source modules, receiving questions and feedback from the team helped me to sharpen my writing and approach. Researching, planning, and participating in site visits helped fine-tune our team’s operating assumptions and determine the path forward. Learning project management in a collaborative space allowed me to consider our everyday project work in a more strategic way as I move into a new role. While I am definitely looking forward to the summer, I am also looking forward to the new challenges and opportunities that await me in the fall semester.

Fall 2017 in the Research Division

For my second year of the fellowship, I am working in the Research Division, mainly on Digital Humanities Now and PressForward. Fall semester was a busy one for PressForward—we released PressForward 5.0, and we have been preparing an overhaul of the website. Helping out with all of that work has made this an exciting time to be in the division.

One of my main duties has been serving as Site Manager of Digital Humanities Now. I format the posts selected by the Editor-in-Chief, manage email, and run DHNow’s Twitter account. This year, we wanted to do a better job of reminding people to sign up to be editors-at-large, to use the bookmarklet, to submit feeds, etc., so I’ve been writing more creative tweets, using more hashtags, and looking for other ways to drive participation. Because I had never used Twitter until I started using it for DHNow, it’s been a great opportunity to learn more about using social media to build engagement. I was also tasked with writing the end-of-the-year blog post, which was another lesson in writing for the DHNow community.

Choosing content for DHNow continues to be one of my favorite things about the fellowship. Over the summer, I had the chance to be full-time site manager and Editor-in-Chief (see this post on my personal blog for more on that experience). This semester, I served a few rotations as Editor-in-Chief. During one of my weeks, I got to work with Amanda Regan and Joshua Catalano to do something a little out of the ordinary. The response on Twitter to a piece in the Chronicle of Higher Education, titled “The Digital-Humanities Bust,” was overwhelming, but there hadn’t been any blog-length posts like those we usually post on DHNow. The shift toward Twitter and away from the blog format has sparked a number of discussions about how DHNow can meet the conversations where they’re happening, and this felt like the right time to try to capture that. We experimented with a few different methods, but you can see the final result here.

My other big project for the semester was to move the PressForward documentation to GitBook and bring it up to date. Although GitBook makes it easy to format everything with its editing tools, I ended up needing to use Markdown to do everything I wanted with the layout and formatting. I’ve used Markdown a few times before, but it was nice to have another refresher and to finally feel proficient. I spent a lot of time improving the documentation, too—clarifying language, adding useful instructions, and making everything consistent. Even though I didn’t have to create the documentation from scratch, I wanted to make sure it was as helpful as possible, so it was a great chance to get a sense of what drafting original documentation would entail. It also allowed me to gain an even deeper understanding of PressForward and all of its functions. I think the completed documentation looks great, and it’s certainly an improvement over the old, outdated version.

The final important task of the semester was helping out with testing to make sure that all of the new functions in PressForward 5.0 were working and that none of the basic features were breaking. Testing really intimidated me at first. I needed to install PressForward using the command line, I needed to be methodical and document everything, and I needed to use GitHub to read, comment on, and create issues. All of these things made me feel like I was going to mess something up, but once I got the hang of things, I realized there was nothing to fear. I’m a very detail-oriented person, and I love the pseudo-detective work that goes into figuring out that a function works when you do things this way but not when you do things that way. A lot of problems came up during testing, including some that I found. It was exciting to play a role in spotting those issues, and it was even more exciting when the developer fixed them. There were a few times when we thought we might not get PressForward 5.0 out before the end of the semester, but with all the time and hard work that the team put into it, we managed to release it just in time. You can download the new version here.

There was a lot of other work that went into getting the PressForward website ready for a relaunch. Because that’s not out yet, I’d rather wait to share all of that when it’s actually visible to the world. With the new website, more testing for future releases, and other tasks that need to be completed before PressForward’s grant ends, the spring semester is bound to be as interesting and instructive as the fall.

Third Rotation: Research Division

The Research Division works to create open-source tools and software to assist historians in researching and presenting their own findings, as well as in engaging with the scholarship of others. During our four-week rotation in this division, we learned about and experimented with PressForward, primarily through working with DH Now, and RRCHNM’s newest tool, Tropy.

First, we took a closer look at Digital Humanities Now, a website that acts as a distribution platform for scholarly work in the digital humanities in various formats, from blog posts to white papers, as well as for conference and job announcements from the field. Coming into the Research Division, we were somewhat familiar with DH Now, as we had been using it all semester to stay current on what was happening in the DH field. However, I was less clear on the actual underpinnings of how DH Now worked. I knew that multiple feeds were brought into the WordPress site and that, from there, DH Now staff and GRAs curated those feeds, choosing what to publish on the website.

In fact, as we learned, DH Now is powered by PressForward, a WordPress plugin created by the Research Division at RRCHNM that allows one to easily aggregate and share content from the web. DH Now is only one of multiple websites that use the plugin to aggregate and share scholarly web content.

We then were tasked with using our new knowledge of PressForward and DH Now to serve as the Editors-in-Chief for a week, reading through nominated content, choosing the best material to publish, and even selecting the Editor’s Choice piece. Through this hands-on experience, we were able to get a sense of how DH Now worked from the editors’ perspective rather than from a consumer’s view.

Finally, we worked with Laura Crossley, one of the Editors-in-Chief of DH Now, to install and use the PressForward plugin on our own scholarly websites. Laura uses PressForward, much like DH Now, to aggregate DH content on her own blog and share her own comments about what is happening in the field. After considering several options, I decided to use PressForward on my personal website in a less extensive, but still quite useful way. Up until this semester, my personal website has contained blog post updates about my coursework and progress through the Graduate Certificate in Digital Public Humanities. Now I have also begun publishing these posts on the RRCHNM Fellows Blog. It is likely that in the near future, I will also publish blog posts or other scholarly web content in places other than my personal website. Therefore, I am excited to be using PressForward on my own website to bring together a collection of my work in one location. This will allow me to do a better job of keeping track of my own work, while also offering a place for others to examine the various ways I have engaged in the scholarly conversation.

Second, we spent time experimenting with and learning about Tropy, the newest research tool built by RRCHNM. Tropy had been released just prior to the beginning of our rotation in the Research Division, so we were some of the first people outside of the Tropy team to really get to see what it can do. Like many of the projects carried out in the Research Division, Tropy tries to solve a current problem troubling historians: what to do with the thousands of pictures that scholars take during trips to the archives. We were challenged to experiment with Tropy by reading the documentation, downloading the software, importing some of our own research materials and, finally, creating a metadata template for that material.

As a historian of Early America, dealing mostly with handwritten documents, I found that Tropy is extremely useful. It allows you to easily import images and group them into documents, which is helpful if you have a multi-page document and a separate image for each page. Next, there is a special split-screen view (document on top, space to type underneath), which allows you to transcribe the documents right in Tropy. Before Tropy, I had been keeping my transcriptions (as Word documents) and image files (in a photo editor) separately, but Tropy allows you to save them together, which is really helpful. Tropy also has a search feature, so you can find every occurrence of a certain person or place’s name in the documents you have transcribed, instead of manually reading through text files to find what you are looking for.

A big part of Tropy, and the second part of our task, was to deal with the metadata associated with our documents. Tropy comes preloaded with a basic template and a few more specific ones (e.g., for correspondence or photos). The templates differ in the metadata that they ask you to add for each item. For example, the generic one asks for information like title, date, item type, source, holding collection, etc., while the correspondence template asks for the title, author, recipient, date, location, archive it is from, etc. The metadata properties have to come from an established vocabulary (e.g., Dublin Core), but users can import other vocabularies available through Linked Open Vocabularies (LOV). Users can also download templates that others have made or upload templates they have made.

For my experiment in creating a Tropy template, I used material from a previous research project about a rape case in Fairfax County, Virginia during the Civil War. While my research had been supplemented by government and prison records held at the Library of Virginia and newspaper records at the Fairfax County Library, the core of my research centered around the case file for the trial, which is held at the Fairfax Circuit Court Historic Records Center, in Fairfax, VA.

Originally, I thought that I might make a template that could be used for all material at the Historic Records Center, as that would be helpful to a larger group of people if uploaded and shared. My thinking was that I could add the Historic Records Center as the Default Value for the Source property so that users would not have to type it in for every item. But as I tried to create a generic template that would encapsulate all the types of sources held at the Historic Records Center (wills, deed books, birth, death, and marriage records, road petitions, as well as court documents), my template ended up being no different from the “Tropy Generic” template that comes with the software.

So I decided to take a different approach and focus on making a template just for the Term Papers at the Historic Records Center. This class of documents provides the legal judgments for each case, and includes any court papers filed during the term of court. As I knew from my research, item properties like “creator” were less important than determining things like the plaintiff, defendant, and case type (murder, debt, trespass, assault, etc.). After a long search through the properties and vocabularies that came with Tropy, I found that I could use the Dublin Core property “Subject” to stand in for the case type, but that there were no good properties already in Tropy to describe the Plaintiff and Defendant. Therefore, I used Linked Open Vocabularies to find a vocabulary through the Library of Congress (MARC Code List for Relators) that included these two properties. Next, I downloaded the vocabulary and imported it into Tropy, where I could add the two properties to my template. Here is a sample of my template:

[Screenshots: Swain FXC Term Papers template, parts 1 and 2]
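For readers who cannot see the screenshots, the idea behind the template can be sketched abstractly. This is not Tropy’s actual template format, just an illustration (in Python, for readability) of how each field pairs a human-readable label with a property URI from an established vocabulary; the MARC relator codes “ptf” (Plaintiff) and “dfd” (Defendant) are my reading of the vocabulary and worth double-checking.

```python
# Hypothetical sketch of the custom Term Papers template described above.
# This is NOT Tropy's on-disk format; it only illustrates the structure:
# each field maps a label to an RDF property from an established vocabulary.

DCTERMS = "http://purl.org/dc/terms/"                # Dublin Core terms
RELATORS = "http://id.loc.gov/vocabulary/relators/"  # MARC Code List for Relators

term_papers_template = {
    "name": "FXC Term Papers",
    "fields": [
        {"label": "Title", "property": DCTERMS + "title"},
        {"label": "Date", "property": DCTERMS + "date"},
        # Dublin Core "Subject" standing in for the case type
        # (murder, debt, trespass, assault, etc.), as described above.
        {"label": "Case Type", "property": DCTERMS + "subject"},
        # Imported from the MARC relators vocabulary; the codes here
        # ("ptf" = Plaintiff, "dfd" = Defendant) should be verified.
        {"label": "Plaintiff", "property": RELATORS + "ptf"},
        {"label": "Defendant", "property": RELATORS + "dfd"},
        # A default value spares the user retyping the archive's name.
        {"label": "Source", "property": DCTERMS + "source",
         "default": "Fairfax Circuit Court Historic Records Center"},
    ],
}
```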

Overall, I enjoyed my time in the Research Division. It was really interesting to see how the team in this division had identified two problems common to the historical profession–namely, how to get more publicity and recognition for scholarly grey literature and how to organize images of documents from archival research–and how they worked to create usable solutions to these problems. I also appreciated the chance to see how these tools could be incorporated into my own scholarly work and blog.

Public Projects Update

I spent the Fall 2017 semester in the Public Projects Division. Since the end of the Spring 2017 semester, as well as over the summer, I have been primarily working with the Hearing the Americas team to complete an NEH planning grant. This digital project will explore the history of the early music industry by recontextualizing digitized recordings from the LOC Jukebox, UCSB Cylinder Archives, and the Internet Archive’s Great 78 Project. Working on this project has been an excellent opportunity to connect my interests in music history and digital public history. I first conducted content research, reading through secondary sources on the history of the early recording industry and locating primary sources that can complement the digitized recordings. Drawing from this research, I created some sample content that reflects the kinds of information and pathways that the site will provide. This sample content included Music Trivia questions, which will give users in-depth explorations of important artists, songs, or themes, as well as sample Omeka item pages for artists, songs, and genres. In addition to textual sources, I also helped to compile a sample set of visual primary sources, including advertisements and catalogs, that will serve both as content and as a guide for the aesthetic design.

I then worked with Megan Brett to create user personas as part of the website design process. First, we identified a number of potential audiences for the website, including Music Fans, Musicians, and Music Writers/Record Collectors. From here, we developed a questionnaire to gauge the kinds of knowledge and expectations people might bring to the website. We first completed in-person interviews with potential users based on a shortened, open-ended version of the questionnaire. Finding people to interview at concerts and other music-based events proved difficult, but the conversations we collected helped us to shape the longer-format Google Form and provided useful feedback for the personas. The online survey utilized Google Forms’ option to create various pathways, which we used to separate questions for musicians from those for other persona types, and asked a series of questions about the user’s knowledge of music history, possible interests, and online behavior. We planned an outreach strategy to disseminate the survey on various social media platforms, utilizing hashtags like #MusicMonday and #MusicHistory to reach the widest possible audience. We initially expected a sample size of about 25 responses and were pleased to greatly exceed that number, reaching 75 responses by the end of the first day and finally closing the form at nearly 100 responses. All of the information we gathered formed the basis for writing five user personas, including composite biographical paragraphs and bullet points that outline potential user behavior.

We sent this sample content and the user personas to our designer, Kim Nguyen, who prepared wireframes and mood boards that reflected the potential information architecture and aesthetics of the website. Using Kim’s wireframes as a guide, I sketched out the potential pages of the website with sharpies and paper in order to do a round of paper prototyping. I represented each page of the website on an individual sheet of paper, allowing for as many foreseeable pathways as possible. I then did prototype testing with people representative of three user personas: Music Fan, Musician, and Music Writer. In this format, the tester “clicks” through the website by pointing to the various options drawn on the page, and I would then switch to the page they selected to simulate a potential pathway. While working through these pathways, the tester also provided feedback about the organization of the website, their expectations about what they would hope to find on each page, and questions about parts of the site that seemed confusing or counter-intuitive. This not only allowed us to test the information architecture developed over the last year, but also provided very useful feedback from people who had more distance from the project and were able to view the prototypes with fresh eyes.

I completed this semester by writing up user experience narratives that drew from the user personas and the paper prototyping. These narratives described scenarios in which people might find and engage with the website, highlighting content like Music Trivia, annotated recordings with musicological comments, and explorations of important artists and genres. All of this work will be included in the final collaboratively written design document that will be submitted to the NEH in the Spring 2018 semester. Working on this project has given me insight into the process of designing a large-scale digital history project, as well as writing and completing grants. Helping to write the user personas and user experiences for the design document was by far the most challenging part of this semester, but it has given me valuable experience in a style of writing not often included in graduate education.

Understanding Tools: Working in the Research Division

In my research, I tend to focus on structures and institutions and their real-life implications for everyday people. In my everyday digital life, I have just come to a place where I am more comfortable with examining the structure of digital tools. This was my starting point for my rotation in the Research Division. At RRCHNM, the Research Division creates open-source tools to promote both individual historical research and the development of collaborative digital communities.

We began the rotation with an overview of PressForward, a software plug-in that allows users to aggregate and share digital content using WordPress. Once the plug-in is installed, content can be collected via both a feed reader and a bookmarklet. Users can discuss, nominate, and share items (including an attribution link and metadata) within WordPress. When I learned that you can also keep track of notes and discussions, I wished I had known about PressForward when I was teaching; it would have been great to have this tool to allow students to review and discuss media related to our curriculum. However, PressForward is more than a cool plug-in; it is a tool by which scholarship outside the typical journal article or dissertation can be widely distributed. Digital Humanities Now (DH Now) is an example of how PressForward is put into practice.
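PressForward itself is a WordPress (PHP) plugin, so the following is not its actual code, but the aggregation step at its core is simple to sketch. Here is a minimal illustration in Python using the feedparser library, with placeholder feed URLs; the idea is just to pull entries from subscribed feeds into a single pool, keeping the attribution link and metadata, for editors to review and nominate.

```python
# Illustrative only: the kind of RSS aggregation PressForward automates
# inside WordPress. Requires feedparser (pip install feedparser);
# the feed URLs below are placeholders.
import feedparser

SUBSCRIBED_FEEDS = [
    "https://example.org/dh-blog/feed/",
    "https://example.com/another-blog/rss",
]

def collect_candidates(feed_urls):
    """Gather items from subscribed feeds, keeping attribution metadata."""
    candidates = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            candidates.append({
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link"),                # attribution link
                "source": parsed.feed.get("title", url),  # originating site
                "published": entry.get("published", "unknown"),
            })
    return candidates

# Editors would then review, discuss, and nominate items from this pool.
for item in collect_candidates(SUBSCRIBED_FEEDS):
    print(f"{item['source']}: {item['title']} ({item['link']})")
```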

DH Now is an experimental, edited publication that highlights scholarship in the digital humanities that drives the field forward. Additional items of interest, such as jobs, CFPs, conferences, funding announcements, reports, and other resources, are also posted–again, the point here is to encourage scholars to share via the open web, and to amplify work and resources that might not get the attention they deserve. Potential content for DH Now is aggregated in multiple ways, whether via RSS from a list of subscribed feeds, Twitter threads, or other sources. The content is reviewed, nominated, and discussed in WordPress using PressForward by volunteer Editors-at-Large. Rotating Editors-in-Chief (faculty and graduate students here at the Center) select content for publication. Acting as Editor-in-Chief gives graduate students the opportunity to examine both content and practice in the digital humanities; it also provides us with experience in crowd-sourcing a DH project.

As a DH Fellow, I’ve been a volunteer Editor-at-Large all semester; that practice, combined with a new and deeper understanding of PressForward, prepared us for our first task: serving as Co-Editors-in-Chief. Together, we reviewed all of the nominated content (which came by way of the feeds as well as the bookmarklet), discussed the pros and cons of each piece, and decided on what we’d publish on DH Now. Our Editor’s Choice piece was the white paper “Digital History and Argument,” a product of the Arguing with Digital History Workshop held here at Mason in September. We also published announcements for conferences and Zotero workshops, two job postings, and new grant guidelines for the DH Advancement Grant from the NEH.

I also spent some time working with Tropy, a newly released tool for organizing research photographs. Users can organize and annotate their photos, as well as export them to share and collaborate with others. I downloaded Tropy to my laptop and used a folder of images from my M.A. research (which may become part of my dissertation) to experiment with it. I was able to combine images (photographs of multiple-page documents) to create consolidated items that were easier to view. Tropy also allowed me to easily add metadata to my photos, such as archive and collection information. Once we had some experience with the software, we were tasked with creating metadata templates. I designed several templates using the following questions as guides:

With the photos I already have, how might I have wanted to organize them differently when I was in the archive last year? For the photos I had, I created a template for that archive, with fields for Collection, Box, Folder, and Subject. Box and Subject were the most important fields for that template, as I was examining multiple events, each with extensive incoming and outgoing correspondence.

What other repositories could I access as I develop my dissertation topic? This question led me to some quick research on a nagging question I’ve had for a few weeks…and behold, I got an unexpected answer that will help me expand my focus for my dissertation! I located three additional repositories that could be helpful in the near future.

If I have access to, say, a collection finding aid, could I develop specific templates for these different repositories? One of the repositories I located had a finding aid (albeit not as detailed as that of the other archive I visited last fall), and I was able to use that to help me develop a template; for the other two repositories, I created a template that was a combination of the first two. In general, creating the templates was easy (and, in the instance of furthering my dissertation topic, extremely helpful!).

I appreciated having the opportunity to work in the Research Division, to experiment with tools that I might have previously overlooked, and to examine ways in which these tools might be refined to serve the needs of even broader audiences.

Second Rotation: Public Projects Division

The Public Projects Division creates tools, projects, and collections that encourage greater interaction with history among popular audiences. Some of the division’s projects are geared directly toward public engagement, while other tools help public history professionals more easily create collections, exhibits, and projects of their own. During our four-week rotation in this division, we worked primarily on two projects, Omeka S and Mapping Early American Elections.

One of the most well-known and in-demand tools that RRCHNM has created, Omeka, comes out of the Public Projects Division. Omeka was released in 2008 as a web content publishing platform that would allow for the assembly, management, and exhibition of digital collections. Omeka S, the division’s newest addition, builds on the popularity of Omeka Classic. Omeka S allows users to create and manage multiple Omeka sites on a single install. It also boasts new modules (plugins) for mapping and importing collections from other systems. Additionally, it allows users to share resources and collections among their multiple sites, and to assign distinct privileges to different levels of users.

When we began our work with Omeka S, it was in its final phase of testing, but as of yesterday, Omeka S 1.0 has officially been released. To start out, we worked with Megan Brett, the Omeka End User Outreach and Testing Coordinator. She taught us how to work with GitHub and secure shell (SSH) via the command line to install themes and plugins on an Omeka install. Then we worked to simultaneously review the existing Omeka S documentation while testing the instructions on the dev site. We were asked to proofread, not only for spelling and grammar errors, but, more importantly, for readability and usability. Did the directions make sense? Were there enough screenshots to help the user follow along with the text? Were the screenshots current? Did they display what a user would really see on his or her screen? Did the dev site respond in the ways that the documentation suggested it should?

This process of reading and testing gave me firsthand experience with using Omeka S and provided me a more profound sense of the tool’s capabilities. It has enabled me to confidently describe Omeka S to others and explain how it differs from Omeka Classic. Finally, it has encouraged me to explore how I can use the new features of Omeka S in my own work.

During the second half of our rotation, we worked on the Mapping Early American Elections project. As an Early Americanist, I was excited to work on a project in my favorite era. Although I normally focus on women, gender, and social history in this period, looking at the early elections was really fascinating. At the time we (briefly) joined the project, the project team had already created a data set based on the information collected in A New Nation Votes (NNV). They were in the process of creating maps from that data set to represent each Congress in each state in order to help visualize the votes based on political parties.

In addition, they were adding brief interpretive text to each map to explain how each state’s election system worked and to call attention to any interesting aspects of the elections or trends from the previous election. To get a taste of this work, we were asked to write the interpretive text for all the states during the first three Congressional elections. Writing this text required us to look at each visualization (map), compare it to the chart derived from the data set, compare it to the data tables, footnotes, and research notes provided by NNV, and then complete additional research for some of the more complicated elections. After we finished writing our interpretive text, Dr. Lincoln Mullen taught us how to use Markdown and GitHub to add some of our text to the dev site for the project.

As a student of history, I really enjoyed the historical inquiry and analysis associated with this assignment, as well as the larger questions that the work forced us to discuss and try to answer. First of all, it reminded me how much I like the investigative and interpretive work of history–trying to sort through many different pieces of evidence in order to form one’s best (informed) guess or interpretation of what happened in the past. The more I found out about each election, the more digging I wanted to do.

Secondly, the work forced me to ask bigger questions like, what does it mean to be elected? In our original instructions, we were asked to mention in the text how many candidates from each political party were elected. While this at first sounded straightforward, it soon proved more difficult. For example, what about elections where one candidate received the most votes, but then the election was contested, votes were later ruled invalid, and the results were officially modified? What if a candidate received the most votes but died before he could take office, or declined to serve? Is there a difference between who was elected and who served in Congress? These and similar questions were discussed during the project meetings before the team settled on a more precise definition.

Most of all, this project showed me how digital history projects can make an argument and contribute to the historiographical conversation. Dr. Rosemarie Zagarri, the Lead Historian on the project, writes in a post on the project’s blog called “What Did Democracy Look Like? Voting in Early America” that “Early American elections subvert conventional notions that portray the development of early American democracy as an orderly or systematic affair.” Doing the research required to write the interpretive text really drove home this argument. Early American elections were, in fact, really messy. After the Constitution was ratified, elections didn’t just automatically happen in an organized and efficient manner that was consistent from state to state. As Zagarri asserts, it was an era of experimentation.

By looking at the voting practices and results for several different states during the same election, it was easy to see how the election systems varied state by state. For example, in the First Congress, Delaware’s election law required voters in each of the state’s three counties to submit the names of two persons they wished to elect. Of these two persons, one was required to be an inhabitant of the voter’s own county and the other needed to be from a different county. The person who received the most votes overall (at-large) would win the election. In the First Congressional election in New York, on the other hand, the state was divided into six districts, and voters in each district elected one candidate to represent their own district.

The experimentation of the era, even within an individual state, was also evident in the change over time in a single state during the first three Congresses. A great example of this is Pennsylvania. For the First Congress, Pennsylvania held an at-large election in which voters were allowed to vote for eight different candidates who could reside anywhere in the state. For the Second Congress, Pennsylvania created eight districts and only allowed voters to elect one candidate, who had to reside within their own district. For the Third Congress, Pennsylvania’s number of congressional seats increased from eight to thirteen (following the results of the 1790 Census) and, consequently, the state discontinued its use of the district system and switched back to an at-large system like the one it had used for the first congressional election. Examples like these provide strong evidence for the project’s historiographical argument.
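To make the data side of this concrete: facts like these are what the underlying data set has to capture for every state and Congress before any map can be drawn. The sketch below is purely hypothetical, not the project’s actual data format, and simply restates the Pennsylvania example above as structured records.

```python
# Hypothetical records for one state -- not Mapping Early American
# Elections' actual data format. Values restate the Pennsylvania
# example discussed above.
pennsylvania = [
    {"congress": 1, "system": "at-large", "seats": 8,
     "note": "Voters chose eight candidates residing anywhere in the state."},
    {"congress": 2, "system": "district", "seats": 8,
     "note": "Eight districts; one candidate per district, residency required."},
    {"congress": 3, "system": "at-large", "seats": 13,
     "note": "Seats rose to thirteen after the 1790 Census; the state "
             "reverted to an at-large system."},
]

for e in pennsylvania:
    print(f"Congress {e['congress']}: {e['system']}, {e['seats']} seats")
```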

Overall, I enjoyed the mix of technical and more traditional (research and analysis) aspects of working in the Public Projects Division. Even though I am leaving this division, it will be interesting to track both of these projects as they progress; I will be curious to see how users respond to Omeka S in its first few weeks post-launch, and to discover what findings come out of the Mapping Early American Elections project.

Public History and Digital Tools: Working in the Public Projects Division

As a public history student at the University of Maryland, Baltimore County, I was part of the discussions that led to the creation of the Preserve the Baltimore Uprising 2015 Archive Project. Over the last two years, I have been interested in the ways public history projects–particularly community archives of contemporary events–are created by way of digital tools. As such, I was eager to begin my rotation in the Public Projects Division.

The division works to develop digital tools, collections, and exhibits that facilitate public participation in history. Popular tools for online exhibitions and community transcription, such as Omeka and Scripto, are developed and maintained in the division. Several digital community archive projects are also housed there, such as the September 11 Digital Archive, the Hurricane Digital Memory Bank commemorating Hurricanes Katrina and Rita, and the Bracero History Archive.

My first task was familiarizing myself with GitHub, a version control repository used primarily for code. Next, I worked through Omeka S as its version 1.0 release drew near. To clarify, Omeka S is different from Omeka Classic: it has a different code base and gives individuals and organizations the ability to create multiple sites from a single install with shared resources; these resources can be shared as linked open data. I reviewed the documentation (which also introduced me to Markdown via MkDocs) and tested the tool by completing a variety of tasks.

The remaining time in Public Projects was focused on Mapping Early American Elections, a project that produces interactive maps and visualizations of elections from 1787 to 1825. I reviewed the election maps, visualizations, and election data from A New Nation Votes to draft explanatory paragraphs for state elections for the first three Congresses (1788-1792). My historical research is firmly situated in the late nineteenth century, yet I thoroughly enjoyed examining eighteenth-century elections and tracking electoral trends. On the last day of my rotation, I was able to use GitHub and Markdown to pull my paragraphs into the dev site.

While my introductory rotation is over, I am looking forward to returning to the division next semester and further developing my skills as a digital public historian.