Dealing with uncertainty and capturing the underrepresented

by Jaap Geraerts

Since I started my project on the schism in the Catholic Church in the eighteenth-century Dutch Republic in the summer of 2019, I have been creating a dataset that comprises the information contained in lists of baptisms, burials, and marriages. This information enables me to trace the movement of Catholics to another, competing Catholic Church in the context of the schism. Consider, for example, Henricus Verbruggen and Maria Blomevelt. They baptised their first two children in a mission station that was part of the Church of Utrecht but had their third and last child baptised in the Roman Catholic Church (see Fig. 1).

Fig. 1.

As can be gleaned from the image above, I model this data in a graph database, which enables me to capture the various relationships between people and their roles at the events in which they participated. Moreover, a graph database allows for a great deal of flexibility. Recently I encountered a fascinating list of Catholics who had “converted” from the Church of Utrecht to the Roman Catholic Church. The list contains extremely valuable information and required me to include a new edge (i.e. relationship), namely ‘converted_at’, and a new node (i.e. event), namely conversion (see Fig. 2).

Fig. 2.
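To make the node and edge terminology a little more concrete, here is a rough sketch in Python of how such person and event nodes and a ‘converted_at’ edge could be expressed. It uses the networkx library and invented identifiers, attributes, and relation names (apart from ‘converted_at’) purely for illustration; it does not show my actual database or schema.

```python
# Illustrative sketch only: people and events are nodes,
# roles such as 'converted_at' are edges between them.
import networkx as nx

g = nx.MultiDiGraph()

# Person nodes (the first two names come from the example above;
# the third person and all identifiers are invented).
g.add_node("p_henricus", type="person", name="Henricus Verbruggen")
g.add_node("p_maria", type="person", name="Maria Blomevelt")
g.add_node("p_convert", type="person", name="N.N.")

# Event nodes: a baptism in a Church of Utrecht mission station and a
# conversion to the Roman Catholic Church (attributes are invented).
g.add_node("e_baptism_1", type="baptism", church="Church of Utrecht")
g.add_node("e_conversion_1", type="conversion", church="Roman Catholic Church")

# Edges express the role a person played at an event.
g.add_edge("p_henricus", "e_baptism_1", relation="parent_at")
g.add_edge("p_maria", "e_baptism_1", relation="parent_at")
g.add_edge("p_convert", "e_conversion_1", relation="converted_at")

# Every event a given person is connected to, together with their role:
for _, event, data in g.out_edges("p_convert", data=True):
    print(event, data["relation"])
```

Adding the new conversion event and the ‘converted_at’ edge is then simply a matter of introducing a new node type and a new relation label, which is the kind of flexibility referred to above.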

Often, these lists are relatively easy to work with, save for abysmal handwriting or a paucity of information due to the serving priest’s lack of interest (or time). However, a more frequent and persistent problem is the uncertainty about whether person A in event B was the same person in event C. Countless spelling variants and the common occurrence of particular names sometimes render it near impossible to tell whether we are dealing with the same person or not. In cases of great uncertainty, I generally refrained from making an identification. When the uncertainty was smaller, however, I sometimes did treat these people as if they were one and the same. Luckily, it is possible to capture such uncertainty – I have done so in the Excel spreadsheet, which functions as a temporary staging database (from which I import the data into my graph database). It is not a problem to include this information in the graph database, but when visualizing the data, as done above, it becomes much trickier to account for this uncertainty. For example, one could capture the uncertainty as an attribute of an edge. However, only one attribute can be visualised at a time, so either someone’s role at an event (e.g. ‘godparent_at’) can be shown or the attribute that denotes the uncertainty. Hence, in visualisations the uncertainty in the data easily slips into the background, creating the misleading impression that all the data is equally certain.
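As a sketch of what I mean (continuing the illustrative networkx example from above; the attribute names and certainty values are my own, not a fixed standard), a role and a certainty label can sit side by side on the same edge, yet a drawing of the graph will typically only display one of them:

```python
import networkx as nx

g = nx.MultiDiGraph()

# A hypothetical, uncertainly identified godparent and a baptism event.
g.add_node("p_joannes", type="person", name="Joannes Jansen")
g.add_node("e_baptism_2", type="baptism")

# The role and the degree of certainty are stored on the same edge,
# but a visualisation can usually only label the edge with one of them.
g.add_edge(
    "p_joannes", "e_baptism_2",
    relation="godparent_at",
    certainty="uncertain",  # e.g. 'certain' / 'probable' / 'uncertain'
)

# When querying, both attributes remain available ...
for _, event, data in g.out_edges("p_joannes", data=True):
    print(event, data["relation"], data["certainty"])
# ... the difficulty only arises when deciding which one to draw.
```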

When I presented this and related problems pertaining to data visualizations together with several colleagues from the IEG DH Lab at the recent conference ‘Digital History: Konzepte, Methoden und Kritiken’ (see the video presentation by Monika Barget), a conference attendee, Moritz Feichtinger, if I remember correctly, asked the following intriguing question:

‘How do you consider it operationalisable to identify the unknown/unrecorded, the “unrepresented,” so to speak, in order to prevent statistical distortions or a problematic claim of “total” recording of the past in data? Does this correspond to the “humanistic approach”? I think that the globally and socially unequal (digital) representation of the past should also be included in analyses in the form of a clearly identified “blind spot” or “fuzziness”.’1

Virtually all the data from the early modern period (or, perhaps better phrased, data based on early modern sources) is incomplete and biased. This incompleteness can be circumstantial, through the accidental loss of sources, or deliberate, through targeted destruction. Biases can result from the mindset, preferences, and outlook of the people and institutions who created the sources. Phrased differently, many sources reflect the hierarchies and power relations of the early modern period, causing certain people and events to be underrepresented or portrayed in a negative light. For example, many Catholic priests only mentioned the mother’s name when they baptized an illegitimate child, possibly because the name of the father was unknown to them or because they sought to protect the father’s honour (and hence only recorded his first name in some instances).

Arguably, a one-size-fits-all approach to this vexing issue does not exist. Rather, one must assess whether the information derived from one body of source material can be enriched by information stemming from other (related) sources. This will not always be possible. Moreover, we need to reflect on the implications of our editorial interventions. For instance, I could try to find the names of the fathers of illegitimate children or, to give another example, infer that a couple had married (even if I cannot find any record thereof) because their children were not called illegitimate in the baptismal registers. Doing this, however, would greatly increase the uncertainty regarding some data in my dataset and would only compound the problems described earlier. Hence, my approach to this issue when working with this specific data would be to provide a lengthy introduction to the primary source material as well as to my editorial policy and decisions. In addition, I aim to capture all the relevant information where possible (e.g., indicating when a child was deemed illegitimate) but refrain from inferring information and including it in the dataset. In the end, as the questioner already indicated, instead of glossing over gaps, inconsistencies, and biases, pointing them out and signalling them is the best way to account for the fact that both the sources and the dataset I have created are the work of fallible human beings and, at best, a flawed and incomplete representation of the past.

Cite this article as: Jaap Geraerts, "Dealing with uncertainty and capturing the underrepresented," in Digital Humanities Lab, 16/04/2021, https://dhlab.hypotheses.org/1952.
  1. The original question in German: „Wie hielten Sie es für operationalisierbar, Unbekanntes/Unerfasstes, gewissermaßen das ‚Nicht-repräsentierte‘ auszuweisen, etwa um statistische Verzerrungen zu verhindern oder auch einen problematischen Anspruch ‚totaler‘ Erfassung der Vergangenheit in Daten? Entspricht das dem ‚Humanistic approach‘? Ich denke, auch die global und sozial ungleiche (digitale) Repräsentiertheit von Vergangenheit müsste in Form einer deutlich ausgewiesenen ‚Blindstelle‘ oder ‚Unschärfe‘ in Analysen mit aufgenommen werden.“

LinkedArt: exploring network analysis in art history

by Sophia Renz and Vanessa Tissen

The beginning

It all started with the seminar on network analysis in the summer semester of 2020. After we had learned the basics of network theory and built networks in Python ourselves, our teachers Aline Deicke and Demival Vasques Filho asked us students to work in groups to develop a project combining our individual humanities backgrounds with network analysis. As specialists in art history, we wanted to bring that field into the project. On top of that, the IEG DH Lab provided us with funds and support to further explore the application of network analysis in art history, e.g. whether art history datasets are available and to what extent they are usable, or which art historical analyses or topics have already been carried out. The research project was kept relatively open, so we were able to look at the subject matter first; tasks and questions developed in the course of our research.

DH fun on the road: 3D models from your photos using photogrammetry

A how-to for having fun with DH on the road by generating 3D models from photos using Structure from Motion photogrammetry

by Sarah Lang

The Digital Humanities have such a huge inventory of digital methods that it is pretty hard to keep up with learning new methods while working with others only recently learned. Shifting academic work into free time can be annoying, but learning new things in a fun way may preserve the leisure factor. This is why I wanted to use this blog post to share a fun way of making first steps in 3D while being on the road, be that a holiday or conference travel.

Managementbedarf in Forschungsprojekten der Digital Humanities

Preamble: This text is a collective document with no attributable authorship. It originated in a course on project management in the Digital Humanities, held in the summer semester of 2020 in the Master’s programme „Digitale Methodik in den Geistes- und Kulturwissenschaften“ at JGU and Hochschule Mainz. Starting from selected literature and the guiding question of what management needs arise in Digital Humanities research projects, the students prepared individual papers. The lecturers then broke the nearly twenty contributions down into their individual arguments, reassembled them into a single text, and edited it only lightly to ensure readability. This mash-up text reflects the nature of the Digital Humanities: iterative, cumulative, collaborative.