
What You See Is What You … Have to Revise Afterwards

by Martin Prell

Data and objectives

In July 2024, thanks to a FAIR Data Fellowship from NFDI4Memory, I had the opportunity to transform a dataset at the DH Lab of the IEG that had been created a few years earlier at the Chair of Gender History of the University of Jena: the handwritten travel diary of the grand tour (Kavalierstour) undertaken between 1740 and 1742 by the Pietist count and later prince Heinrich XI. Reuß-Greiz (1722–1800), held in the Staatliche Bücher- und Kupferstichsammlung Greiz. The diary had at the time been digitised by the Thüringer Landes- und Universitätsbibliothek and transcribed and annotated with the help of the virtual research environment FuD. Among other things, FuD includes a so-called WYSIWYM XML editor; the acronym stands for “What you see is what you mean”, because data is entered through a graphical interface without necessarily ever touching the markup itself. By the end of the indexing work, however, the planned presentation of the diary in the Editionenportal Thüringen was no longer possible. The goal of the FAIR Data Fellowship was therefore to transform and publish the FuD XML research data in such a way that the edition becomes as findable, long-term accessible, interoperable and reusable as possible. Continue reading “What You See Is What You … Have to Revise Afterwards”
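The full post describes how the transformation was actually carried out; purely as a rough illustration of the kind of step involved, the following is a minimal sketch of applying an XSLT stylesheet to an exported XML file with lxml in Python. The file names fud_export.xml and fud2tei.xsl are placeholders, not the project's actual files.

```python
from lxml import etree

# Hypothetical file names; the real FuD export and stylesheet differ.
source = etree.parse("fud_export.xml")      # XML exported from FuD
stylesheet = etree.parse("fud2tei.xsl")     # XSLT mapping to the target schema

transform = etree.XSLT(stylesheet)          # compile the stylesheet
result = transform(source)                  # apply it to the export

# Serialise the transformed document for publication.
with open("tei_output.xml", "wb") as f:
    f.write(etree.tostring(result, pretty_print=True,
                           xml_declaration=True, encoding="UTF-8"))
```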

Competing demands: on combining different activities during my Postdoc

A small project called Europäische Friedensverträge der Vormoderne in Daten (FriVer+) ran at the DH Lab in 2023. Its main aim was to transform the data that had been gathered during an earlier project, Europäische Friedensverträge der Vormoderne online, and stored in an SQL database, into XML and to make it publicly available, all in accordance with the FAIR principles. The various activities this project comprised are already described in a post on the Text+ blog as well as in the project documentation, so I won’t reiterate that information here. Instead, I would like to offer a short reflection on the advantages and challenges of combining a project like FriVer+ with my role as a postdoc at the IEG Mainz.

Continue reading “Competing demands: on combining different activities during my Postdoc”
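The linked post and the project documentation describe FriVer+’s actual workflow; purely as an illustration of the general pattern behind such a migration (relational rows out, XML elements in), here is a minimal, hypothetical Python sketch. The database file, table and column names are invented.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical database and schema; FriVer+ worked with its own SQL source.
connection = sqlite3.connect("treaties.db")
rows = connection.execute(
    "SELECT id, title, treaty_date FROM treaties ORDER BY treaty_date"
)

root = ET.Element("treaties")
for treaty_id, title, treaty_date in rows:
    treaty = ET.SubElement(root, "treaty", attrib={"id": str(treaty_id)})
    ET.SubElement(treaty, "title").text = title
    ET.SubElement(treaty, "date").text = treaty_date

# Write a single XML file that can then be validated and published.
ET.ElementTree(root).write("treaties.xml", encoding="utf-8",
                           xml_declaration=True)
connection.close()
```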

Integrating library data into an authority file: The challenges of MARC XML and inconsistent transcription practices

A guest post by Till Grallert

A recent Twitter post by my former colleague Anne Klammt made me aware of the relaunch of the “Zeitschriftendatenbank” (ZDB), the portal for periodical holdings in German (and Austrian) libraries.

Part of the German National Library (Deutsche Nationalbibliothek, DNB), the relaunched website looks great and provides a lot of data-driven functionality, such as maps and timelines of holdings. The display language of the website itself, though not of the bibliographic data, can be toggled between German and English. This is a welcome nod to international users and will certainly increase the visibility of this important portal. Unfortunately, however, the bibliographic dataset itself is not as accessible as the interface. Languages written in scripts other than Latin are provided in a variety of inconsistent transcriptions into Latin script, for mostly historical and technical reasons. This is not the fault of the ZDB per se, but it will prevent communities from the Global South from finding and accessing their own cultural heritage, which for various reasons is held by institutions in the Global North. This is especially relevant for Arabic material, as I will elaborate in the section on transliterations below.
Continue reading “Integrating library data into an authority file: The challenges of MARC XML and inconsistent transcription practices”
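To make the MARC XML hurdle a little more concrete, the sketch below (my own illustration, not part of the guest post) uses lxml to pull the main title field (245 $a) and any parallel original-script field (880 $a) out of MARC XML records, which is typically where the Latin transliteration and, if present, the Arabic original end up. The file name records.xml is a placeholder.

```python
from lxml import etree

MARC_NS = {"marc": "http://www.loc.gov/MARC21/slim"}

# Hypothetical export file containing <record> elements in MARC XML.
tree = etree.parse("records.xml")

for record in tree.iterfind(".//marc:record", namespaces=MARC_NS):
    # Field 245 $a holds the (often transliterated) title proper.
    title = record.findtext(
        'marc:datafield[@tag="245"]/marc:subfield[@code="a"]',
        namespaces=MARC_NS)
    # Field 880 $a, if present, holds the parallel original-script form.
    original = record.findtext(
        'marc:datafield[@tag="880"]/marc:subfield[@code="a"]',
        namespaces=MARC_NS)
    print(title, "|", original or "no original-script field")
```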

Geohumanities III: analysing early modern mobility through birth and apprenticeship letters

By Monika Barget

In the winter term 2020/2021, Jaap Geraerts and I worked with students in the Mainz MA programme “Digitale Methoden in den Geistes- und Kulturwissenschaften” (“Digital Methods in the Humanities and Cultural Studies”) to create a digital edition of early modern birth and apprenticeship letters. The edition includes records in French and Latin as well as German and highlights people’s cross-border mobility in the seventeenth and eighteenth centuries. Continue reading “Geohumanities III: analysing early modern mobility through birth and apprenticeship letters”

Text to XML with Python, based on the “Bomber’s Baedeker”

by Felix Bach and Cristian Secco

Converting digitised printed works from image files into machine-readable XML is an important data-preparation step for many Digital Humanities methods. In this post we present an approach based on a Python script, using a work with a distinctive internal structure as our example: the Bomber’s Baedeker was a “travel guide” used by the Royal Air Force to target German industrial sites during the Second World War. Continue reading “Text to XML with Python, based on the ‘Bomber’s Baedeker’”
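The full post walks through the actual script; as a rough, hypothetical sketch of the general idea, turning OCR plain text with a recognisable internal structure into XML, one might do something like the following in Python. The heading pattern and file names are assumptions, not the original code.

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical input: OCR plain text in which each target town starts
# with a line in capital letters (the real script uses its own rules).
HEADING = re.compile(r"^[A-ZÄÖÜ][A-ZÄÖÜ\s-]+$")

root = ET.Element("gazetteer")
entry = None

with open("bombers_baedeker.txt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        if HEADING.match(line):
            # Start a new <entry> for each detected town heading.
            entry = ET.SubElement(root, "entry", attrib={"name": line.title()})
        elif entry is not None:
            ET.SubElement(entry, "p").text = line

ET.ElementTree(root).write("bombers_baedeker.xml", encoding="utf-8",
                           xml_declaration=True)
```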

Doing Digital History with Python I: reading (messy) XML & JSON data

by Monika Barget

During our DH brownbag lunches at the IEG, colleagues have repeatedly asked us if we could recommend Python packages for digital history. We have therefore set up a list of packages we at the IEG DH Lab are using for the analysis of text (stored, for instance, in XML/TEI or JSON formats), the modelling of historical networks, or the creation of interactive maps.

The list Python for digital history is based on our personal experiences and, though by no means exhaustive, may serve as an appetizer for “Doing Digital History with Python”. In a series of blog posts, we will try to introduce you to some of the packages mentioned through case studies from current IEG research.

Today’s post covers the extraction of data from XML and JSON files with xml.etree.ElementTree, lxml, json(5) and beautifulsoup(4), since reading structured text is often the starting point of digital history projects. Continue reading “Doing Digital History with Python I: reading (messy) XML & JSON data”
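As a small appetiser for that post, the snippet below (an illustration with placeholder file and element names, not the IEG case study itself) shows two common entry points: ElementTree for well-formed XML, BeautifulSoup as a more forgiving fallback for messy markup, and the standard json module for JSON files.

```python
import json
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup  # the beautifulsoup4 package

# Well-formed XML/TEI: the standard library is usually enough.
tree = ET.parse("letters.xml")               # placeholder file name
for person in tree.iter("persName"):         # placeholder element name
    print(person.text)

# Messy or not quite well-formed XML: BeautifulSoup is more forgiving.
with open("messy.xml", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "lxml-xml")      # requires lxml to be installed
for person in soup.find_all("persName"):
    print(person.get_text())

# JSON: load the file into ordinary Python dictionaries and lists.
with open("records.json", encoding="utf-8") as f:
    records = json.load(f)
print(len(records))
```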