
Doing Digital History with Python I: reading (messy) XML & JSON data


by Monika Barget

During our DH brownbag lunches at the IEG, colleagues have repeatedly asked us if we could recommend Python packages for digital history. We have therefore set up a list of packages we at the IEG DH Lab are using for the analysis of text (stored, for instance, in XML/TEI or JSON formats), the modelling of historical networks, or the creation of interactive maps.

The list Python for digital history is based on our personal experiences and, though by no means exhaustive, may serve as an appetizer for “Doing Digital History with Python”. In a series of blog posts, we will try and introduce you to some of the packages mentioned through case studies from current IEG research.

Today’s post covers the extraction of data from XML and JSON files with xml.etree.ElementTree, lxml, json(5) and beautifulsoup(4), as reading structured text is often the starting point of digital history projects. In many cases, the encoded texts available to historians do not come from a single source and are not necessarily well-structured. XML or JSON files are provided by different archives or created in the context of public humanities projects that involve volunteers in the transcription of historical sources. As a result, XML or JSON files may not always parse as expected, which in turn makes it impossible to ingest them directly into DH tools such as the DARIAH-DE topics explorer for topic modelling.

One example of faulty XML and JSON files emerging from a crowd-sourced project is the Letters 1916-1923 project in Ireland, a collection of over 6,000 items mainly transcribed by people with little historical and/or technical expertise. As a former member of the project team who continues to publish on the collection, my aim was to use an XML data dump exported prior to the redesign of the Letters 1916-1923 database in 2017 and a JSON data dump from the new backend (exported in 2019) for comparative topic modelling and an analysis of female authorship.

Both data dumps contained many files that would not parse. One reason was frequent UTF-8 encoding errors in Irish-Gaelic words; another was the highly complex and fault-prone nesting structure introduced to handle transcriptions of each page of a letter individually in the frontend. In order to extract the content of these letters, it was therefore necessary to apply exception handling for the UTF-8 issues as well as for inconsistencies in the file hierarchies.
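
Before deciding on a library, it can be useful to check how many files in a corpus actually fail to parse. The following lines are a minimal sketch (not one of the project scripts, and the folder path is only an example): they loop over an XML corpus with xml.etree.ElementTree and record which files raise a ParseError instead of letting the first faulty file stop the run.

#!/usr/bin/env python
# coding: utf-8
# Sketch: report which XML files in a folder cannot be parsed (example path).

import os
from os.path import join
import xml.etree.ElementTree as ET

directory = "C:\\Users\\mbarg\\Documents\\corpus"  # example location of XML files

broken = []  # collect names of files that do not parse
for infile in os.listdir(directory):
    try:
        ET.parse(join(directory, infile))  # raises ParseError for malformed XML
    except ET.ParseError as err:
        broken.append((infile, str(err)))

for name, err in broken:
    print(name, "-", err)
print(len(broken), "files did not parse")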

My first attempt to extract the XML data was to use xml.etree.ElementTree, an ElementTree API that works very well for reading and building neatly structured XML files. However, neither this library nor lxml, though both easy to use, could solve my particular data problem. I had to resort to beautifulsoup, which was built for HTML as well as XML and tolerates a less rigid hierarchy of tags. With beautifulsoup, I was able to extract selected information from individual files as well as create sub-corpora of files sharing certain characteristics.

#!/usr/bin/env python
# coding: utf-8
from bs4 import BeautifulSoup
import os
from os.path import join

directory = "C:\\Users\\mbarg\\Documents\\corpus"  # location of XML files on local drive
results = []  # create result list

for infile in os.listdir(directory):
    filename = join(directory, infile)
    with open(filename, "r", encoding="utf-8", errors="ignore") as indata:  # UTF-8 encoding errors are ignored
        contents = indata.read()
    soup = BeautifulSoup(contents, 'xml')
    titles = soup.find_all('title')  # get item titles
    for title in titles:
        print(title.get_text())
        results.append(title.get_text())

print(results)  # result list is shown on screen

ReadXML_titles_withBeautifulSoup.py [github]

#!/usr/bin/env python
# coding: utf-8

from bs4 import BeautifulSoup
import os
from os.path import join
import shutil

directory = "C:\\Users\\mbarg\\Documents\\corpus"  # directory containing all XML files

results = []  # new result list
item = "Female"  # keyword searched for in the XML

for infile in os.listdir(directory):
    filename = join(directory, infile)
    with open(filename, "r", encoding="utf-8", errors="ignore") as indata:  # UTF-8 encoding errors are ignored
        contents = indata.read()
    soup = BeautifulSoup(contents, 'xml')
    keywords = soup.find_all('keywords')  # access XML tag where gender info is stored
    for keyword in keywords:
        items = str(keyword.get_text())
        if item in items:  # find "Female" among the keywords
            results.append(infile)  # add name of XML file to result list
            shutil.copy2(join(directory, infile), 'C:\\Users\\mbarg\\Documents\\corpus_women')  # copy file to subcorpus

print(results)  # print list of file names in subcorpus
print(len(results))  # print number of files in subcorpus

Find letters by female authors_XMLwithBeautifulSoup.py [github]

Reading my equally messy JSON files was also very challenging and required some workarounds [script].


#!/usr/bin/env python
# coding: utf-8

import os
from os.path import join
import json
import re

in_directory = "C:\\Users\\mbarg\\Google Drive\\ACADEMIA\\FeministDH for Susan\\6_JSON files Letters 1916-1923"
out_directory = "C:\\Users\\mbarg\\Google Drive\\ACADEMIA\\FeministDH for Susan\\7_Transcriptions from JSON files\\JSON_April27"

cleanr = re.compile('<.*?>')  # regex matching any XML tag

items = os.listdir(in_directory)
for i in items:
    filename = join(in_directory, i)
    with open(filename, "r", encoding="utf-8", errors="ignore") as indata:  # UTF-8 encoding errors are ignored
        jsondata = json.loads(indata.read())
    print(type(jsondata))
    print(len(jsondata))

    for n in range(len(jsondata)):
        try:
            outdata = jsondata[n]
            if outdata:
                letter_content1 = outdata.get('element')  # 'element' holds the letter data as a nested JSON string
                if letter_content1:
                    letter_content2 = json.loads(letter_content1)  # parse the nested JSON string
                    letter_content3 = letter_content2.get('pages')  # list of page objects
                    for r in range(len(letter_content3)):
                        letter_content4 = letter_content3[r]['transcription']  # XML-encoded page transcription

                        letter_ID = str(i) + str(n) + "." + str(r)  # file name + letter index + page index

                        if letter_content4:
                            print(letter_ID, ":", letter_content4)
                            cleantext = re.sub(cleanr, ' ', letter_content4)  # strip XML tags

                            out_file_name = join(out_directory, str(letter_ID) + ".txt")
                            with open(out_file_name, "w", encoding="utf-8") as out_file:  # write plain text to output folder
                                out_file.write(str(cleantext))
        except KeyError:
            print("Not found.")
            continue
Letters_content_fromJSON.py [github]


The transcriptions (including XML tags) I was interested in were hidden in a sequence of “pages” within the main “elements” of the JSON hierarchy. Each JSON file contained transcriptions of 500 letters with up to ten pages each, and as the data objects in the JSON files included strings containing further JSON as well as dictionaries and lists, an iterative excavation of the various data layers was necessary. I used Python’s standard json library to get to the XML-encoded transcriptions and then applied re.compile and re.sub to turn these transcriptions into plain text.
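
To illustrate that last step, here is a small, self-contained example (the letter snippet is invented) of how re.compile and re.sub replace every XML tag in a transcription with a space:

import re

sample = "<p>Dear <persName>Mary</persName>,<lb/> I hope this letter finds you well.</p>"
cleanr = re.compile('<.*?>')             # matches any XML tag (non-greedy)
cleantext = re.sub(cleanr, ' ', sample)  # replace each tag with a space
print(cleantext)                         # plain text, with extra whitespace where tags were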

The .txt files I created could then be used for multiple data analyses, including topic modelling. Although working with my messy data in Python was not always straightforward, I would not have been able to process the data at all without the flexibility of writing my own Python scripts.
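
As a rough sketch of that next step (the output folder name is only an example), the plain-text letters can be read back into a list of documents that topic modelling tools or other text analyses can work with:

import os
from os.path import join

txt_directory = "C:\\Users\\mbarg\\Documents\\corpus_txt"  # example folder holding the exported .txt files

documents = []
for infile in sorted(os.listdir(txt_directory)):
    if infile.endswith(".txt"):
        with open(join(txt_directory, infile), "r", encoding="utf-8") as f:
            documents.append(f.read())  # one plain-text letter per list entry

print(len(documents), "plain-text letters loaded")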

PS: For technical information, installation advice and further resources, consult the README in my Digital History repository.



Featured Image: Python molurus, Print, between 1700 and 1880, 27,07 cm x 21,44 cm, Iconographia Zoologica, Special Collections of the University of Amsterdam, Wikimedia Commons.


OpenEdition suggests citing this post as follows:
Monika Barget (30 April 2020). Doing Digital History with Python I: reading (messy) XML & JSON data. DH Lab. Retrieved 7 February 2025 from https://doi.org/10.58079/nl58


Author: Monika Barget

Studied History, Art History and Catholic Theology in Augsburg (Magistra Artium). 2009–2013 professional work in content management, copy-editing, customer service and editorial roles. 2013–2017 doctoral studies in the Cluster of Excellence “Cultural Foundations of Integration” at the University of Konstanz. 2017–2018 research project management at the Centre for Digital Humanities at Maynooth University (Ireland). Since January 2019 research associate at the IEG (Digital Humanities Lab / Digital Historical Research).
