Visiting a museum as a kid, I sometimes wondered why I could hardly read anything in the medieval or early modern books and manuscripts displayed in the exhibition. Even after I learned Latin in school, the situation did not improve. I was not aware that people in former centuries used a lot more abbreviations than today, especially in Latin texts. As long as paper (or parchment) was very expensive, scribes and printers tried to save as much space as possible. Therefore, a sentence like “In principio fecit deus caelum et terram” could be abbreviated as “In prīcipio fecit de⁹ celū ⁊ ťrā” (“In the beginning, God created the heavens and earth” — the first verse of the Bible). You may have noted that these abbreviations worked differently than abbreviations like “WWW” or “U.S.” that we are familiar with today, and it would be nice if they could be resolved automatically with a Python script.
This miniseries describes a digital workflow that starts with raw images of 16th century Latin prints, extracts the text, normalizes it, corrects transcription errors, and finally compares two texts to uncover differences, i.e. censorship. These are the episodes:
- OCR with 16th century prints.
- Let Python speak to Transkribus.
- Normalizing 16th century raw text.
- Detecting OCR transcription errors.
- How did the censors actually change the text?
- Finding text re-use.
Just like today, words could be represented by their initial letters (like “a.d.” for “anno domini”, “in the year of the Lord”). Some abbreviations of this type are still in use, such as “etc.” for “et caetera” (“and so on”). In those days, however, “etc.” could also be written as “et caet.”, “& caet.”, “&c.”, or even “⁊c.”. The last example, by the way, starts with a so-called “tironian et” (“⁊”) which was already used in ancient times as part of a complex system of shorthand invented by a Roman scribe called Tiro, the so-called “Tironian notes”. While the “⁊” almost completely disappeared, the ampersand “&” made it to our keyboards. In the example from above, the same idea of replacing a syllable by a mark with an independent meaning is used in “de⁹” where “⁹” stands for “us”, so “de⁹” means “deus” (“God”). And since the syllable “ter” could be replaced by “ť”, “terra” (“earth”) could be abbreviated as “ťra”.
In some cases, our medieval and early modern friends even had to know some Greek to get the point: “Jesus” was abbreviated as “IHS” or “ihs”/“jhs”, using the first three letters of the name written in Greek: “ΙΗΣΟΥΣ” (the Greek sigma “Σ” was transliterated into a Latin “S”). The same could be done with “Christ” = “Christos” = “ΧΡΙΣΤΟΣ” = “XP” or “xp”. If you visit an old church, you may find these abbreviations on reliefs, sculptures or paintings, sometimes contracted to “☧”; or take a look at the coat of arms of pope Francis I.
As you can see, scribes (and printers) were quite creative at that time. Things get even worse when an abbreviation can be resolved in various ways, as is the case with the straight bar above a vowel, called a “macron”. There are three macrons in the example above: “prīcipio” = “principio” (“in the beginning”), “celū” = “celum”, or in classical spelling “caelum” (“heaven”), and “ťrā” = “terram” (“earth”). In the first case, the macron replaces an “n”; in the other two, an “m”. You need to know your vocabulary and grammar to make the right choice between “n” and “m”.
These examples are only a very small part of a large and complex system of abbreviations which was in use back then. To read heavily abbreviated texts fluently, you need a certain palaeographic expertise and a lot of training. Apart from a growing number of tutorials1, the best resource to get started is the “Lexicon abbreviaturarum”, a dictionary of thousands of abbreviations compiled by an Italian archivist in Parma, Adriano Cappelli (1859–1942, no English Wikipedia article yet!). The first three editions in Italian are available online (e.g. the third edition) as is a German translation. Cappelli’s introduction will help you to understand different types of abbreviations and to solve most of the palaeographic problems with medieval and early modern Latin texts. Thanks to a crowd-sourcing initiative in 2015, the dictionary has been converted into a searchable database.
Resolving abbreviations with Python and Regular Expressions
The abbreviations themselves were more or less standardized, but their use was not. Thus, one and the same sentence could be written very differently. Compare the following passages taken from two different editions of the same text:
A human reader who knows how to resolve abbreviations would consider the two snippets to be identical. A computer, however, would compare the text letter by letter and find many differences, not only on the level of abbreviations (“Prędicamus/Prædicamus”, “Christū/Christum”, “&c̄./&c.”), but also regarding the usage of interchangeable characters (“uerum/verum”, “egreſsi/egreſſi” — “ſ” is just another form of writing “s”). And there is another problem: hyphenation. If there is a hyphen at the end of a line (like in “ſtulti- | tiam”), it would be easy for a computer to join the two parts. But often the hyphen is missing (as in “Præ | dicamus”), and a computer could easily misinterpret the two parts as two different words.
If we really want to find censorship automatically (as promised in episode I), we have to find a way to overcome all these problems. One approach would be to normalize both texts in a way that their normalized versions are identical. For example, “Prędicamus Christū” and “Prædicamus Christum” should both be transformed into “Praedicamus Christum”. That means, the computer should search for problematic characters like “ę”, “æ”, or “ū” and normalize them.
However, a simple search and replace operation will fail, because there are cases where problematic characters should be replaced only if they occur in a certain position within a word or in a certain context. Example: Sometimes the word “cum” (“with”) is written as “quum”. If “quum” were always replaced with “cum”, other words such as “antiquum” (“old”) would also be changed to “anticum”, which is obviously wrong. To prevent the script from doing this, we need a way to say, “Please, replace ‘quum’ with ‘cum’ only when ‘quum’ is a word by itself.” So it’s all about pattern matching. We should be able to define a search pattern like “quum as a word by itself”, or, more formally, `[word boundary]quum[word boundary]`.
REGULAR EXPRESSIONS
In the 1960s, a formal language called “regular expressions” (“regex”) was developed to solve this kind of pattern-matching problem. A regular expression is a sequence of characters that describes a search pattern. These patterns consist of normal characters (like “A”, “x” or “3”) and “metacharacters” which have a special meaning. For example, `.` in a regular expression matches “any character”. If you are looking for an actual full stop, you have to search for `\.`. The metacharacter `\s` matches “any whitespace character”, `\w` means “any word character”, and `\d` stands for “a digit”. Searching for `\d\d\.\d\d\.\d\d\d\d` would match things like “14.11.1483” or “15.06.1520”, but not “14 November 1483” or “1483-11-14”. To learn more about all the metacharacters available and the logic behind regular expressions, check out a good tutorial and play around with the regex101 web app, which provides a live preview of the matches along with extensive help.
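To illustrate, here is a minimal Python snippet (not taken from the scripts of this series) that tests the date pattern from above:

```python
import re

# Dates written as DD.MM.YYYY match the pattern, other formats do not:
date_pattern = r"\d\d\.\d\d\.\d\d\d\d"

for candidate in ["14.11.1483", "15.06.1520", "14 November 1483", "1483-11-14"]:
    if re.fullmatch(date_pattern, candidate):
        print(candidate, "matches")
    else:
        print(candidate, "does not match")
```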
Let’s use regex to solve the quum/cum problem: The metacharacter `\b` matches a “word boundary”, so `\bquum\b` matches “quum as a word by itself”. This is exactly what we need. Give this solution a test on regex101. In Python, the built-in `re` module enables us to use regular expressions in our code. Note that the `\bquum\b` pattern must be a raw string, i.e. we have to put an “r” before the first quote (`r"\bquum\b"`), otherwise the `\b` metacharacter will not work. (Read the docs of the `re` module to understand why!)
```python
import re

# Provide the lines as a list:
lines = ["Prędicamus Christū",
         "Prædicamus Christum",
         "antiquum quum equum aequum"]

# Provide the regex patterns and the corresponding replacements
# as a list of lists:
replacement_table = [["æ", "ae"],
                     ["ę", "ae"],
                     ["ū", "um"],
                     # Make sure to use a raw string (r"PATTERN") here:
                     [r"\bquum\b", "cum"]]

# Loop through the lines:
for line in lines:
    print("raw:   ", line)
    # Loop through the items in the replacement_table
    # to apply all the predefined search and replace operations:
    for pattern, replacement in replacement_table:
        line = re.sub(pattern, replacement, line, flags=re.IGNORECASE)
    print("cooked:", line, "\n")
```
Listing 1: `simple_regex.py` on Github.
The `simple_regex.py` script works fine, but it would be better to outsource the replacement table to an actual table document. It is much easier to modify the table using LibreOffice or the like than to change the Python code directly. We should also try to handle the raw string problem more elegantly and improve some other details. Last but not least, the script should be wrapped as a function (`.replace_abbreviations()`) of a class, let’s call it `Cleaner`, to make it more (re-)usable (like we did with the Transkribus client in the last episode). You can find the improved script `cleaner.py` on Github along with an extensive replacement table that has grown over the last two years and is still work in progress.
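As a sketch of what the outsourced table could look like in code, here is one way to load pattern/replacement pairs from a semicolon-separated file; the file name `replacements.csv` and the two-column layout are assumptions for this example, not the actual format used by `cleaner.py`:

```python
import csv
import re

def load_replacement_table(path):
    """Read regex patterns and replacements from a two-column CSV file.
    (Hypothetical layout: column 1 = pattern, column 2 = replacement.)"""
    table = []
    with open(path, newline="", encoding="utf-8") as csvfile:
        for row in csv.reader(csvfile, delimiter=";"):
            if len(row) >= 2:
                table.append([row[0], row[1]])
    return table

def clean_line(line, replacement_table):
    """Apply all search and replace operations to one line of text."""
    for pattern, replacement in replacement_table:
        line = re.sub(pattern, replacement, line, flags=re.IGNORECASE)
    return line
```

A nice side effect: patterns read from a file are not processed as Python string literals, so a pattern like `\bquum\b` works without the extra “r” of a raw string.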
The need for a Latin dictionary
The `.replace_abbreviations()` function of the `Cleaner` class can search for patterns and replace them, but it cannot resolve the macrons properly. While the hard-coded rule in `simple_regex.py` correctly resolves “ū” in “Christū” as “um”, it would produce an error in “nūc”, which should be “nunc” and not “numc”. The easiest way to overcome this problem is to look up both possible solutions in a dictionary, i.e. a list of all Latin words including all inflected forms. Then we could verify “nunc” and “Christum” as the correct forms and discard “numc” and “Christun”. Digital Latin dictionaries normally include only the base forms of the words (like Karl E. Georges: Ausführliches lateinisch-deutsches Handwörterbuch), but some can handle inflected forms very well (like Navigium, William Whitaker’s Words2 or a Latin Hunspell dictionary by Karl Zeiler and Jean-Pierre Sutto).
Theoretically, we could use one of these online dictionaries for our purpose. However, sending an HTTP request takes a lot of time (about 700 milliseconds for Whitaker’s Words and more than one second for Navigium), and we would run into problems sending hundreds of requests to the dictionary’s server. Fortunately, you can download Whitaker’s Words3 as a command line program to speed things up. However, you need to write some code (using Python’s `subprocess` module) to make use of the command line program in your Python script. With this approach, it takes about four seconds to check 50 words. This is pretty slow, although the results are quite good because Whitaker’s Words knows many proper names and can handle medieval spelling. It would be more efficient to integrate a Latin Hunspell dictionary using the `cyhunspell` package, which takes only two milliseconds for 50 words, but the results are worse because Hunspell does not know proper names like “Andreas” and complains about non-classical spelling (“columpna” instead of “columna”, or “tercium” for “tertium”). Nevertheless, we will use the Hunspell solution for now.
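For the subprocess approach, a very rough sketch could look like the following; the path to the Whitaker’s Words binary and the “UNKNOWN” marker in its output are assumptions that may differ depending on the build you downloaded:

```python
import subprocess

def whitakers_knows(word, binary="./words"):
    """Ask a locally installed Whitaker's Words binary about a word.
    Both the binary path and the check for "UNKNOWN" in the output
    are assumptions and may need to be adapted to your installation."""
    result = subprocess.run([binary, word], capture_output=True, text=True)
    return "UNKNOWN" not in result.stdout.upper()

# Usage sketch:
# print(whitakers_knows("Christum"))
```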
Compared to Whitaker’s Words, using `cyhunspell` is relatively simple: First, install `cyhunspell` from PyPI as described in the documentation. Then, download the Latin Hunspell dictionary by Karl Zeiler and Jean-Pierre Sutto, not as one `.oxt` file as provided by the authors, but as two single files (an `.aff` and a `.dic` file) which you can find on Github. Unzip the downloaded `.zip` file in your project folder, tell your Python script about the path to the dictionary folder, and you’re ready to go:
```python
from hunspell import Hunspell

# Initialize the dictionary object. hunspell_data_dir is the
# directory of the two dictionary files "la_LA.aff" and "la_LA.dic":
h = Hunspell('la_LA', hunspell_data_dir='hunspell-la')

# Check a list of words:
words = ["amamus", "credo", "asdf", "wrong!", "spes"]
for word in words:
    print(word, h.spell(word))
```
Listing 2: `simple_hunspell.py` on Github.
Once again, it is more convenient to wrap all the logic in a class named `Dictionary` that initializes the `Hunspell` object and provides a `.check_word()` function which returns `True` if the word exists. See `dictionary.py` on Github.
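A minimal sketch of such a wrapper might look like this (the real `dictionary.py` on Github may differ in its details):

```python
from hunspell import Hunspell

class Dictionary:
    """Thin wrapper around the Latin Hunspell dictionary (simplified sketch)."""

    def __init__(self, data_dir="hunspell-la"):
        # "la_LA" refers to the la_LA.aff/la_LA.dic pair in data_dir:
        self.hunspell = Hunspell("la_LA", hunspell_data_dir=data_dir)

    def check_word(self, word):
        """Return True if the word exists in the dictionary."""
        return self.hunspell.spell(word)

# Usage sketch:
# dictionary = Dictionary()
# dictionary.check_word("amamus")  # -> True
```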
Resolving macrons
Using the `Dictionary` class, we can build a function which resolves macrons properly: First, it has to find all the macrons in a word plus the corresponding vowels. Then, it has to build a list of all possible solutions. Thirdly, the dictionary checks whether the words in the list exist or not. If one of them does, the function returns that word as the solution. Otherwise, it replaces the macron with a dot “●” so that we can easily spot unresolved macrons in the text. See the `.replace_macrons()` function of the `Cleaner` class; a simplified sketch follows the example below.
Example: “Christū”
- Find macrons and vowels: macron: “ū”, vowel: “u”.
- List of possible solutions: `["Christun", "Christum"]`.
- The dictionary says: “Christun” = `False`, “Christum” = `True`.
- Return “Christum”.
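Here is a much simplified sketch of this logic for a single macron per word; the mapping table and the function name are illustrative and not taken from `cleaner.py`:

```python
# Map each macron character to its plain vowel (illustrative subset):
MACRONS = {"ā": "a", "ē": "e", "ī": "i", "ō": "o", "ū": "u"}

def resolve_macron(word, dictionary):
    """Resolve one macron by testing "n" and "m" against the dictionary."""
    for macron, vowel in MACRONS.items():
        if macron in word:
            # Build the possible solutions, e.g. "Christun" and "Christum":
            candidates = [word.replace(macron, vowel + letter) for letter in ("n", "m")]
            for candidate in candidates:
                if dictionary.check_word(candidate):
                    return candidate
            # Nothing found: mark the unresolved macron with a dot
            # (the exact marking may differ from the original script).
            return word.replace(macron, vowel + "●")
    return word  # No macron in this word.

# Usage sketch:
# resolve_macron("Christū", dictionary)  # -> "Christum"
```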
Tokenizing the text
Since the dictionary can only process one word at a time, we have to feed it with single words. That means, before we call the `.replace_macrons()` function, we have to split each line of text into individual words. The process of splitting a stream of data into separate parts (or “tokens”) is called “tokenization”.
Again, regular expressions can help us to solve this problem. In Python, the built-in `re` module provides a `re.split()` function that does most of the job in one line: `re.split(r"([\s.,;:?\-=()\]])", text)` splits the text at each whitespace character (`\s`) and each punctuation mark (`.,;:?-=()]`; note that “-” and “]” must be preceded by a backslash so they are not interpreted as metacharacters). Some post-processing decides whether a token is a normal word or punctuation. The `.tokenize()` function returns a list containing the tokenized text, which can be fed to the `.replace_macrons()` function. In the end, we get each line of text as a list of word objects (Python dictionaries) containing the normalized words and the information whether the token is a normal word or punctuation.
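A minimal sketch of such a tokenizer; the keys of the word objects (“text”, “type”) are illustrative and may differ from the actual `.tokenize()` function:

```python
import re

# Split at whitespace and punctuation, keeping the separators:
SPLIT_PATTERN = r"([\s.,;:?\-=()\]])"

def tokenize(line):
    """Split one line into a list of word objects (simplified sketch)."""
    tokens = []
    for part in re.split(SPLIT_PATTERN, line):
        part = part.strip()
        if not part:
            continue  # Skip empty strings and pure whitespace.
        is_punctuation = re.fullmatch(r"[.,;:?\-=()\]]", part) is not None
        tokens.append({"text": part,
                       "type": "punctuation" if is_punctuation else "word"})
    return tokens

# Usage sketch:
# tokenize("Prędicamus Christū, et cetera.")
```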
Resolving hyphens and line breaks
There remains one last problem: hyphenation and line breaks. As we have seen above, there is not always a hyphen when a word is split at the end of a line.
We have to distinguish several cases:
1. Even when there is a hyphen, resolving line breaks is surprisingly complicated: Thanks to the hyphen we do not need to consult a dictionary before joining the two tokens, but we cannot resolve the macron if we consider only the first element. In “cō- | scientia”, “cō” only makes sense in combination with “scientia”, so we should run the `.replace_macrons()` function on the combined string “cōscientia” after having resolved hyphenations and line breaks. Joining the tokens means that the hyphen after “cō” is dropped and that “scientia” moves from the beginning of the second line to the end of the first line.
2. If the first line ends with punctuation other than “-” or “=”, the tokens do not need to be joined, but we have to resolve the macrons for both tokens separately.
3. The same applies if the second line begins with a capital letter.
4. If the hyphen is missing, we try to join both tokens and verify the word in the dictionary. If the word does not exist, we proceed as in case 2.
5. If it does exist, we move the second token to the end of the first line as in case 1. If the second token in the second line is punctuation, it must also be moved to the end of the first line. Moving tokens to the first line sometimes results in an empty second line; in this case, we should flag the second line as “empty”.
This complicated logic is implemented in two functions of the `Cleaner` class: `.resolve_linebreaks()` decides whether to join the two tokens or not, and `.join_words()` combines the words if necessary and normalizes them, including the macrons.
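To make the decision logic more tangible, here is a much simplified sketch of the join decision for the last token of one line and the first token of the next; the function name and token keys are illustrative, not the actual implementation in `cleaner.py`:

```python
def should_join(last_token, first_token, dictionary):
    """Decide whether the last token of a line and the first token of the
    following line belong to the same word (simplified sketch of the cases above)."""
    last, first = last_token["text"], first_token["text"]
    # Case 1: an explicit hyphen (or "=") at the end of the first line.
    if last.endswith(("-", "=")):
        return True
    # Case 2: the first line ends with other punctuation.
    if last_token["type"] == "punctuation":
        return False
    # Case 3: the second line begins with a capital letter.
    if first[:1].isupper():
        return False
    # Cases 4 and 5: no hyphen, so test the joined word against the dictionary.
    return dictionary.check_word(last + first)
```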
The normalization pipeline
We now have all the building blocks for a complete normalization pipeline. First, we download the raw text using the `Transkribus_Web` client developed in episode II. Then, the `Cleaner` class does its job: `.replace_abbreviations()` uses the replacement table to normalize most of the abbreviations. The third step is tokenization using a regular expression. The tokenized lines are ready for resolving line breaks in the fourth step. Here, and when we finally resolve the macrons, we need to consult a Latin dictionary.
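Put together, the pipeline could be orchestrated roughly like this; this is only a sketch, and the actual method names and signatures of the classes on Github may differ:

```python
from cleaner import Cleaner
from dictionary import Dictionary

def normalize_page(raw_lines):
    """Run the normalization pipeline on the raw text lines of one page.
    Step 1, downloading the lines via the Transkribus_Web client, is assumed
    to have happened already; the method calls below are assumptions."""
    cleaner = Cleaner()
    dictionary = Dictionary()
    # Step 2: apply the replacement table to each line.
    lines = [cleaner.replace_abbreviations(line) for line in raw_lines]
    # Step 3: tokenize each line.
    tokenized = [cleaner.tokenize(line) for line in lines]
    # Step 4: resolve hyphens and line breaks across neighbouring lines.
    tokenized = cleaner.resolve_linebreaks(tokenized)
    # Step 5: resolve the remaining macrons with the help of the dictionary.
    return [cleaner.replace_macrons(line, dictionary) for line in tokenized]
```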
A simple command line interface
All the functions created in this episode are wrapped in classes (`Cleaner`, `Dictionary`, `IO_Tools`) which live in different `.py` files (`cleaner.py`, `dictionary.py`, `tools.py`). Splitting the code into different files makes it more maintainable and easier to keep track of. The classes in these files can be imported as modules, e.g. by saying `from cleaner import Cleaner`, meaning “use the `Cleaner` class from `cleaner.py` in this script”. The only thing missing is a main function from where we can operate the whole machine.
For now, a very simple command line interface should do the job: The `Transkribus_CLI` class in `cli.py` organizes the communication between the user and the various parts of the software. `cli.login()` initializes the `Transkribus_Web` client (written in episode II) and logs the user in to the Transkribus server. `cli.choose_collection()` lets the user choose a collection from a list of all available collections. Using the same logic, the user can then select a document and a page. Finally, `cli.print_page()` downloads the raw text (the diplomatic transcription) from Transkribus, feeds it to the normalization pipeline, and prints it to the screen. After that, we say goodbye with `cli.logout()`.
```python
from cli import Transkribus_CLI

cli = Transkribus_CLI()
cli.login()

colId = cli.choose_collection()
docId = cli.choose_document(colId)
pageNr = cli.choose_page(colId, docId)

page = cli.get_page(colId, docId, pageNr)
cli.print_page(page, raw_text=False)

cli.logout()
```
Listing 4: The `Transkribus_CLI` class provides a rudimentary command line interface. Using this CLI, the `normalizer.py` script selects collections, documents and pages on the Transkribus server, downloads the raw text from Transkribus, normalizes it, and prints it to the screen.
This rudimentary user interface is fine for demonstration and testing purposes, but it is too clumsy for working efficiently. Therefore, we will give it a thorough overhaul in the next episode and build a user-friendly web interface. In addition, we will make use of the Latin dictionary to detect OCR transcription errors and correct them semi-automatically.
Footnotes
1 The Ad fontes learning program of the University of Zurich offers a very nice introduction to palaeography and exercises; Wikipedia provides some basic information; Transkribus has its own training section; for the peculiarities of English palaeography see the National Archives.
2 Cf. extensive documentation on Whitaker’s Words: http://archives.nd.edu/words.htm.
3 Find the source code of Whitaker’s Words on Github here or here. There is also a port from Ada code to Python.
Featured Image: Let Python Talk to Transkribus, 2021, by Markus Müller, CC BY-SA 4.0,
built upon Wolpertinger by Rainer Zenz, CC BY-SA 3.0; Python natalensis, in: Andrew Smith: Illustrations of the zoology of South Africa, Reptilia. Smith, Elder, and Co., London 1840, PD; Markus Müller: Sammelband mit Schriften Johann Wilds, Wissenschaftliche Stadtbibliothek Mainz, Signatur 559 q 1; The ‘haunted table’ in the bar of the Black Horse Inn public house, on Nuthurst Street in Nuthurst, West Sussex, England, by Acabashi, CC BY-SA 4.0.