
Uncovering censorship in the 16th century with Transkribus and Python. Episode I: OCR with Latin prints

by Markus Müller with the collaboration of Melina Ramirez

Introduction

Since the 1980s, our knowledge of 16th century censorship has been growing constantly thanks to coordinated research projects and new sources. In over ten years of research, for instance, J. M. de Bujanda and his team edited all printed catalogues of forbidden books issued by Catholic authorities in the 16th century (eleven volumes by 1999). This made it very easy to check whether a certain author or book was banned or not. With the opening of the Archive of the Congregation for the Doctrine of the Faith (the former Roman Inquisition) in 1998, a vast number of manuscripts became available that provided an insight into the internals of the Roman Congregation of the Index (founded in 1572) and the decision-making processes of the censors.

These processes are particularly interesting for books that were not banned outright but expurgated, meaning that the censors deleted or modified passages considered “heretical” and then reprinted the book. In many cases, the only way to find out what they actually changed is to compare the original with the expurgated edition line by line and word by word. Because Rome was far from the only Catholic authority that expurgated books, the same book was sometimes also expurgated by the Spanish Inquisition, the Sorbonne in Paris, and other local inquisitors. They all had their own censoring procedures, resulting in different expurgations. If we want to uncover the differences between these expurgations, we have to repeat the tedious manual comparison over and over again. These repetitive comparisons are exactly the kind of “boring stuff” that could be automated with Python. Before we start comparing, though, the 16th century texts have to be digitized. This is not straightforward because we have to deal with a poor typeface, hundreds of abbreviations, and other peculiarities.


This miniseries describes a digital workflow that starts with raw images of 16th century Latin prints, extracts the text, normalizes it, corrects transcription errors, and finally compares two texts to uncover differences, i.e. censorship. These are the episodes:

  1. OCR with 16th century prints. Transkribus offers a (relatively) simple way to convert images of text into machine-readable text. Based on Deep Learning, it can be trained to recognize any text, even handwriting. This episode describes some typical pitfalls when feeding 16th century prints to Transkribus, proposes solutions, and takes stock after processing more than 700 pages.
  2. Let Python speak to Transkribus. Transkribus has an Application Programming Interface (API) for uploading and downloading data. This is very useful for further processing the recognized text with Python and for dealing with some of the pitfalls described in episode 1. This episode shows how to access the Transkribus REST API using Python.
  3. Normalizing 16th century raw text. Early modern printers did not follow strict orthographical rules and used hundreds of abbreviations to save paper and ink. If two different printing workshops printed the same text, the outcome varied significantly at the character level. This episode presents a Python script that normalizes orthographical variants and resolves abbreviations in order to make the texts comparable.
  4. Detecting OCR transcription errors. If well trained, Transkribus can achieve very good results. Nevertheless, a few transcription errors will remain, and sometimes, the 16th century printers had a bad day and introduced errors as well. A Latin spellchecker built with Python can help to find these errors automatically.
  5. How did the censors actually change the text? After digitizing the texts, correcting the remaining errors, normalizing the orthographical variants, and resolving the abbreviations, we can at last compare the original and the expurgated version word by word using a Python script (a minimal sketch of such a comparison follows this list). As a result, we can see how the censors modified the original text during the expurgation.
  6. Finding text re-use. In the 16th century, it was common practice to re-use texts from other authors without reference. A combination of Python and other software tools can help to find similarities between texts, providing insight into the practice of early modern text production.
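To give a first impression of what such a word-by-word comparison can look like, here is a minimal sketch using Python’s built-in difflib module; it is not the script from episode 5, and the two Latin word lists are invented for illustration only:

```python
import difflib

# Two invented word lists standing in for an original and an expurgated passage.
original   = "fides sola iustificat hominem coram Deo".split()
expurgated = "fides et opera iustificant hominem coram Deo".split()

# SequenceMatcher aligns the two lists and reports only the places where they differ.
matcher = difflib.SequenceMatcher(None, original, expurgated)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, original[i1:i2], "->", expurgated[j1:j2])
# prints: replace ['sola', 'iustificat'] -> ['et', 'opera', 'iustificant']
```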

Episode I: OCR with 16th century prints

OCR and artificial neural networks

Optical Character Recognition (OCR) is a very useful tool for historians. Thanks to the rise of artificial neural networks in computation, OCR has improved so much in recent years that it can tackle even very complex problems like Handwritten Text Recognition (HTR). So getting a computer to read Latin books printed in the 16th century should be easy, shouldn’t it? — Well, yes and no.

The most important thing to consider is that neural networks are not “intelligent” as such. They can only do the things we want them to do if they are trained with thousands of examples. Computer science explains why this is the case, how artificial neural networks are built, and what “training” actually means. For now, it is enough to imagine a very stupid student (the neural network) and a very stupid teacher (the computer program that trains the neural network). Suppose the student is supposed to learn the meaning of 100 Chinese characters, but the teacher neither tells the student what to do nor how to do it. Instead, the teacher just presents one of the Chinese characters and lets the student guess. Then the teacher compares the student’s answer with the solution in the teacher’s manual and either praises or punishes the student. These steps are repeated until the student’s error rate is acceptable, let’s say 0.3%.
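For readers who prefer code to analogies, the following toy sketch mimics this guess-compare-adjust loop in plain Python. It is a deliberate caricature: the “student” is just a lookup table rather than a neural network, and nothing here resembles real training with weights and gradients.

```python
import random

# A few invented character/meaning pairs: the "teacher's manual".
training_data = [("一", "one"), ("二", "two"), ("三", "three")]

memory = {}  # the "student": starts out knowing nothing


def guess(character):
    # Answer from memory if possible, otherwise guess randomly.
    return memory.get(character, random.choice(["one", "two", "three"]))


error_rate = 1.0
while error_rate > 0.003:  # repeat until the error rate is acceptable (0.3%)
    errors = 0
    for character, solution in training_data:
        if guess(character) != solution:   # the teacher compares answer and solution
            memory[character] = solution   # "punishment": the student corrects itself
            errors += 1
    error_rate = errors / len(training_data)

print(f"training finished, error rate: {error_rate:.1%}")
```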

Let’s switch back to reality. There are various software solutions that make use of artificial neural networks for OCR, including big projects like OCR-D, eScriptorium and Transkribus, or smaller ones like Kraken and many more. I decided to use Transkribus because it not only provides a full-fledged desktop app and web platform with many useful features for every step of the transcription workflow, but also a server infrastructure on which the entire heavyweight training and recognition process takes place. Moreover, it is ready to be used by teams or even for crowdsourced research. The only downside is that, due to the end of public funding by the European Union in 2019, power users (who want to recognize 500+ pages) have to pay a small fee per page (see the pricing model).

Given that you don’t have to buy or rent your own hardware, this seems like a fair deal to me. Since there are plenty of good tutorials on how to use the Transkribus client software (including webinars) as well as on the technical background, this article will focus on possible pitfalls and the peculiarities of dealing with early modern printed Latin texts.

Getting good quality images

The starting point of OCR with neural networks is the same as in our teacher/student example: you need training data, that is, first of all, images of the text to be recognized. Thanks to the mass digitization of whole libraries over the last few decades, it is (in most cases) very easy to get good quality images of 16th century prints. In my experience, “good quality” means colour images (alternatively greyscale) with at least 2,500 pixels on the longer edge. For older OCR software, pure black and white images were the way to go, but OCR software based on neural networks gives better results with colour gradients, especially in the first step of the OCR workflow, layout analysis (more on that later). Brown and stained paper or warped pages are not a problem as long as the neural network is trained to deal with messy images. More problematic is damage caused by bookworms or ink corrosion. Rule of thumb: if you can read it, Transkribus will be able to read it as well.

OCR based on neural networks can handle curved lines as well as brown and stained paper, but bookworm holes are a problem.
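If you want to check a batch of downloaded scans against this rule of thumb before uploading them, a few lines of Python are enough. The following sketch assumes the images sit in a local folder called “scans” and that the Pillow library is installed:

```python
from pathlib import Path

from PIL import Image  # pip install Pillow

# Flag every image whose longer edge falls below the ~2,500 pixel rule of thumb.
for path in sorted(Path("scans").glob("*.jpg")):
    with Image.open(path) as img:
        width, height = img.size
        if max(width, height) < 2500:
            print(f"{path.name}: only {max(width, height)} px on the longer edge")
```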

Before manually downloading hundreds of images one by one, you should consider some more efficient strategies: Transkribus can import either a PDF file or a folder containing images (PNG or JPG). If the library does not provide PDFs, try to find your book on Google Books. Many libraries, including the Bavarian State Library in Munich, cooperate with Google when they digitize their holdings. Unfortunately, Google Books converts colour images to pure black and white when you press the “download PDF” button. However, you can still download colour images from Google Books using Python (see my books.py script on GitHub). Similar approaches can be used for other image sources, although it can be quite challenging to work out the complexities of web scraping. At the end of the day, you have to weigh the pain of manually downloading images against the hassle of writing Python code. There are specialized Python packages for web scraping as well as very good tutorials (e.g. Monika Barget’s tutorial on web scraping).
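To give a rough idea of what such a download script can look like (this is not the books.py script, and the URLs are mere placeholders that depend entirely on the image source), here is a minimal sketch based on the requests library:

```python
import time

import requests  # pip install requests

# Placeholder URLs; a real script would first collect them from the viewer pages.
urls = [
    "https://example.org/scans/0001.jpg",
    "https://example.org/scans/0002.jpg",
]

for number, url in enumerate(urls, start=1):
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # stop if the server returns an error
    with open(f"page_{number:04d}.jpg", "wb") as image_file:
        image_file.write(response.content)
    time.sleep(1)  # be polite and do not hammer the server
```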

Layout analysis

After uploading the images to Transkribus, the next step is “segmentation” or “layout analysis”, i.e. Transkribus identifies the different elements on the page (heading, footer, marginalia, paragraphs, etc.) and the lines of text within these elements. After processing more than 500 pages, my conclusion is that this step cannot be automated completely. In most cases, you will have to use the “segmentation mode” in Transkribus to manually correct the results of the automated segmentation.

There are two built-in tools for automated segmentation in Transkribus. Since the standard tool (“CITlab Advanced”) was often unable to correctly distinguish between the text body and marginalia, I decided to use the second tool, “P2PaLA” (“Page to PAGE Layout Analysis”), which is based on neural networks and can therefore be trained. To generate ground truth for P2PaLA, my student assistant Melina and I manually marked the “TextRegions” on about 70 pages using the “segmentation mode” in Transkribus. We then trained a P2PaLA model, which we used to analyse further pages. By repeating the cycle of recognition, manual correction, and training several times, the P2PaLA model has now been trained on about 500 pages and delivers very good results for the page layouts it has been trained on.


Pro tips:

  • When running the layout analysis with P2PaLA, you will be asked for a “Min area” in the dialogue box. I got the best results with a value of 0.7 in combination with checking the “rectify regions” box.
  • TextRegions can be tagged according to their role on the page, e.g. “Header”, “Paragraph”, “Marginalia”, “Page Number”, etc. These tags are extremely useful for post-processing and I strongly recommend using them. This seems very tedious at the beginning but there are two efficient helpers: First, you can define shortcuts to assign a tag to a TextRegion. Secondly, the P2PaLA models are able to automatically tag the recognized TextRegions depending on their position on the page. This virtually eliminates the problem of tagging the TextRegions once your P2PaLA model is well trained.

    Tagged TextRegions in segmentation mode.
  • When correcting the TextRegions, make sure they do not overlap, as P2PaLA does not like overlapping TextRegions. This is especially important if the text body and the marginalia are printed very close next to each other.
  • If you deal with two-column and single-column layouts, train a separate P2PaLA model for each of them. This gives better results than mixing the two.
  • Name your trained models consistently, e.g. “MyProject 2-col 2021-05-30” and “MyProject 1-col 2021-05-30” etc.

In contrast to the TextRegions, in my case the recognition of lines often worked better with the standard layout analysis tool CITlab Advanced. Therefore, I combined both approaches: P2PaLA recognizes the TextRegions, CITlab Advanced goes for the lines. In this case, make sure to uncheck “Find Text Regions” in CITlab Advanced before clicking “Run”. Otherwise, the already recognized TextRegions will be overwritten. Like the TextRegions, the generated “BaseLines” must also be checked manually. Usually, the results are quite good, but sometimes the BaseLines are interrupted without good reason or the line endings and beginnings do not match the actual text.

Bottom line: The manual drawing of the TextRegions and the correction of the BaseLines initially took up to ten minutes per page. After several cycles of training and manual correction, P2PaLA achieved significantly better results. A model trained with about 500 pages can reduce the time for manual post-processing to a maximum (!) of 30 seconds per page.

Optical Character Recognition

Before starting the actual transcription, the first thing you should do is check the list of public models. It is possible that someone else has already published a model that meets the requirements of your particular case. The Noscemus project, for example, regularly publishes its model, which can recognize prints with Latin and Greek characters from the 15th to the 18th century. Unfortunately, when I started my project in 2018, the model still had some issues. Since then, its character error rate has dropped to 0.91%, i.e. on average 0.91% of the characters on a page are wrong. On an octavo page with, let’s say, 4,352 characters, about 40 of them would be wrong. There is yet another problem: the Noscemus model tries to resolve the abbreviations in the text but sometimes fails, so quite a lot of manual correction is necessary to achieve the quality needed for a word-for-word comparison.

Therefore, instead of using the Noscemus model, I decided to take a two-step approach:

  1. Train a model that produces a diplomatic transcription – i.e. the text is transcribed as accurately as possible, including all special characters.
  2. Write a Python script that resolves the abbreviations (more on this in episode 3 of this series; a minimal sketch follows below).
Typical abbreviations in 16th century Latin prints.
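To illustrate what step 2 amounts to, here is a minimal sketch of a dictionary-based replacement. The glyphs and their expansions are examples only; the real, much longer table and the handling of context-dependent abbreviations are the subject of episode 3:

```python
# Illustrative mapping from abbreviation glyphs in the diplomatic transcription
# to expanded forms; the actual table is discussed in episode 3.
ABBREVIATIONS = {
    "⁊": "et",    # Tironian et
    "ꝑ": "per",   # p with stroke through the descender
}


def resolve_abbreviations(text: str) -> str:
    """Replace every known abbreviation glyph with its expanded form."""
    for glyph, expansion in ABBREVIATIONS.items():
        text = text.replace(glyph, expansion)
    return text


print(resolve_abbreviations("⁊ ꝑ fidem"))  # -> "et per fidem"
```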

Melina and I produced training data (or “ground truth”) by manually transcribing about 70 pages. This was the toughest part because, in the beginning, it took 30 to 45 minutes per page. The most important thing in this phase was communication and coordination within our small two-person team of transcribers. We had to disambiguate the broad variety of sometimes odd-looking characters and decide which Unicode characters should represent them in the transcription. In this context, even small details can be crucial for the performance of the trained model. And there are many pitfalls:

Abbreviation or not?
  • Sometimes it is difficult to decide whether you are dealing with a new special character/abbreviation, a damaged printing type, dirt on the page, etc.
  • Special characters usually represent abbreviations, so a consistent transcription is essential even if the typeface varies between different books. The famous “Lexicon abbreviaturarum” by Adriano Cappelli (now available as a database, thanks to a crowdsourcing project) is indispensable for identifying these abbreviations and understanding their meaning. A Unicode table with graphical search helps to choose a suitable representation.

    Variations of “etc.”.

  • Special attention is needed for characters that could be represented either by a combination of two Unicode characters or by a single character: to write “ā” you could either use Unicode U+0101 = “ā” or a combination of a normal letter “a” and a combining macron “◌̄” (U+0304). Both options are valid, but you should stick with only one of them for consistency (see the sketch after this list).
  • Melina had the idea of collecting and discussing all problems that came up in a shared online text file, where we also documented all the special characters and their correct transcription in a table. This kind of thorough documentation is extremely helpful to avoid losing track.
  • We also proofread each other’s transcriptions to reduce the error rate. A shared spreadsheet served as a to-do list. Nevertheless, even four eyes cannot spot all errors, especially in the case of misspelled words with visually almost identical letters, such as “ſ” and “f” (“Chriſtus” vs. “Chriftus”) or “l” and “I” (“Iona” vs. “lona”). Be very thorough here: any transcription error will scale up because a neural network trained with wrong examples will reproduce the same errors when it is applied to new images.
  • Since thorough proofreading is extremely tedious and time-consuming (about 20 min per page), you can build a Latin spellchecker with Python to speed things up (see episode 4 of this series). This can reduce the time needed for proofreading to about 5 min per page. The quality of the trained model also benefits from this approach: The character error rate dropped to 0.35% (trained with 974 pages, 51,107 lines).
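The sketch announced above shows how Python’s standard unicodedata module relates the two representations of “ā”, which makes it easy to check that a transcription sticks to the chosen convention:

```python
import unicodedata

precomposed = "\u0101"   # "ā" as a single code point (U+0101)
combining = "a\u0304"    # "a" followed by the combining macron (U+0304)

print(precomposed == combining)                                  # False
print(unicodedata.normalize("NFC", combining) == precomposed)    # True: composed form
print(unicodedata.normalize("NFD", precomposed) == combining)    # True: decomposed form
```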

Bottom line

Even though neural networks can automate OCR of early modern prints, the very time-consuming manual transcription and manual layout analysis are still necessary during the training phase. After a good model has been trained, proofreading still takes time and is error-prone. Nevertheless, it is possible to speed up the whole process significantly by combining Transkribus with Python (which we will do in the next episodes of this series). The following table gives an idea of the time required, based on experience with transcribing more than 700 pages. Thanks to Python as an ‘afterburner’ and under ideal circumstances, 10 or even 15 pages per hour are possible.

step                  manually      automated (with …)
layout analysis       10 min        0 min (Transkribus)
layout correction     ½–1 min       still ½–1 min
transcription         30–45 min     0 min (Transkribus)
proofreading          20 min        5 min (Python)

Featured Image: Let Python Talk to Transkribus, 2021, by Markus Müller, CC BY-SA 4.0, built upon Wolpertinger by Rainer Zenz, CC BY-SA 3.0; Python natalensis, in: Andrew Smith: Illustrations of the Zoology of South Africa, Reptilia. Smith, Elder, and Co., London 1840, PD; Markus Müller: Sammelband mit Schriften Johann Wilds, Wissenschaftliche Stadtbibliothek Mainz, Signatur 559 q 1; The ‘haunted table’ in the bar of the Black Horse Inn public house, on Nuthurst Street in Nuthurst, West Sussex, England, by Acabashi, CC BY-SA 4.0.


Suggested citation:
Markus Müller (2 July 2021). Uncovering censorship in the 16th century with Transkribus and Python. Episode I: OCR with Latin prints. DH Lab. https://doi.org/10.58079/nl5w


Author: Markus Müller

Studied Catholic theology and education science in Tübingen (2000–2006) and Salamanca (2002/2003). 2012: PhD in Catholic theology (ecclesiastical history) in Tübingen. Research assistant in Tübingen, assistant lecturer in Frankfurt and Mainz. Since 2018, postdoctoral researcher at the Institute of European History in Mainz. – Research interests: history of religious education, censorship in the 16th century, Digital Humanities.
