Digitized historical newspapers have been a crucial source for my research over the past decade. Exactly ten years ago, in 2014, I began my research journey with historical newspapers at the University of Innsbruck in Austria. Working with a semi-automated multilingual approach to analyze a corpus of over 20,000 articles from German and Italian newspapers, I started learning how to apply digital methods to historical sources: analyzing large-scale historical newspaper archives simply required such skills. Continue reading “Large-Scale Research with Historical Newspapers: A Turning Point through Generative AI”
Tag: Python
Uncovering censorship in the 16th century with Transkribus and Python. Episode VI: Finding text re-use
In 1550, the Catholic cathedral preacher Johann Wild (Ioannes Ferus, 1495–1554) admitted in the preface to his commentary on the Gospel of John that he reused texts of Protestant authors such as Johannes Brenz (1499–1570) and Johannes Oekolampad (1482–1531) in his book, but that he only borrowed thoughts that were compatible with Catholic teaching.1 Unfortunately, there are no footnotes or other references in his text to the authors he presumably cited. Since 1950, however, various historians have found verbatim parallels or at least significant similarities between Wild’s commentaries and Protestant authors.2 But these findings were more or less accidental. It is still unclear to what extent Wild actually quoted Protestant authors and which authors he used in particular. The main reason is that a manual search for verbatim parallels is very time-consuming, even more time-consuming than searching for censorship. So, is there a way to hack the search for literal quotes in 16th century books?
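One possible approach, sketched below under the assumption that both texts are already normalized, is to break them into word n-grams (“shingles”) and intersect the resulting sets: every shared n-gram is a candidate for a verbatim parallel. The function names and sample sentences are invented for illustration.

```python
# Hedged sketch: surface verbatim parallels between two texts by
# intersecting their word n-grams. Names and samples are invented.

def ngrams(text, n=5):
    """Return the set of word n-grams contained in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_passages(text_a, text_b, n=5):
    """Word n-grams that occur verbatim in both texts."""
    return ngrams(text_a, n) & ngrams(text_b, n)

wild = "in principio fecit deus caelum et terram et cetera"
brenz = "nam in principio fecit deus caelum et terram ut legimus"
print(shared_passages(wild, brenz))
# {'in principio fecit deus caelum', 'principio fecit deus caelum et',
#  'fecit deus caelum et terram'}
```

Longer n-grams produce fewer false positives but miss slightly reworded quotations, so the window size is a trade-off.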
Uncovering censorship in the 16th century with Transkribus and Python. Episode V: How did the censors actually change the text?
We have come a long way since episode I of this miniseries: After digitizing the texts, normalizing the orthographic variants, and resolving the abbreviations, we used an interactive web app to find and correct remaining transcription errors. Now that the texts are free of mistakes, we can finally use them for comparisons. In this episode, we will compare an original text with an expurgated reprint to find censorship. Since the censors sometimes manipulated only one or two characters in a word, thereby changing the meaning of the whole sentence, we will compare the texts word by word using the Python module difflib.
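A minimal sketch of such a word-by-word comparison with difflib; the two sample sentences are invented (a censor inserting a single “non” inverts the meaning of the whole clause):

```python
import difflib

# Compare an original sentence with its (invented) expurgated reprint
# word by word using difflib.ndiff.
original = "fide sola iustificamur coram deo".split()
reprint = "fide non sola iustificamur coram deo".split()

for token in difflib.ndiff(original, reprint):
    # "- " marks words only in the original, "+ " words only in the reprint
    if token.startswith(("-", "+")):
        print(token)
# + non
```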
Uncovering censorship in the 16th century with Transkribus and Python. Episode IV: Detecting OCR transcription errors
In the last episode, we built a pipeline to convert a diplomatic transcription into normalized Latin text. The code works fine as long as the diplomatic transcription is correct. But what happens if the transcription contains errors or, even worse, if the printer in the 16th century misspelled a word? Right now, nothing: our pipeline cannot detect these errors. This is a problem because as soon as we start comparing two editions of the same text to check for censorship (and that’s where we are going!), the slightest difference between the two texts may be interpreted as censorship. Can we solve this problem? Yes, we can!
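One simple strategy, sketched here with an invented mini-vocabulary, is to flag every token that does not occur in a reference list of normalized Latin words; the actual pipeline in this series may work differently:

```python
# Hedged sketch: flag possible transcription errors by checking each
# token against a reference vocabulary. Wordlist and sample are invented.
latin_vocabulary = {"in", "principio", "fecit", "deus", "caelum", "et", "terram"}

def suspicious_tokens(text):
    """Return tokens that do not appear in the reference vocabulary."""
    return [w for w in text.lower().split() if w not in latin_vocabulary]

print(suspicious_tokens("In principio fecit deus celum et terram"))
# ['celum']  -> probably a spelling variant or a transcription error
```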
Uncovering censorship in the 16th century with Transkribus and Python. Episode III: Normalizing 16th century raw text
Visiting a museum as a kid, I sometimes wondered why I could hardly read anything in the medieval or early modern books and manuscripts displayed in the exhibition. Even after I learned Latin in school, the situation did not improve. I was not aware that people in former centuries used a lot more abbreviations than today, especially in Latin texts. As long as paper (or parchment) was very expensive, scribes and printers tried to save as much space as possible. Therefore, a sentence like “In principio fecit deus caelum et terram” could be abbreviated as “In prīcipio fecit de⁹ celū ⁊ ťrā” (“In the beginning, God created the heavens and earth” — the first verse of the Bible). You may have noticed that these abbreviations worked differently than abbreviations like “WWW” or “U.S.” that we are familiar with today, and it would be nice if they could be resolved automatically with a Python script.
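A minimal sketch of automatic resolution, assuming a hand-made mapping from abbreviation signs to their expansions; the mapping below covers only some of the signs from the example above and is far from complete:

```python
# Map a few common Latin abbreviation signs to their expansions.
# Illustrative only; a real mapping would contain many more entries.
ABBREVIATIONS = {
    "ī": "in",   # macron marks an omitted nasal after i
    "ū": "um",   # macron marks an omitted m after u
    "⁹": "us",   # superscript 9 stands for the -us ending
    "⁊": "et",   # Tironian et
}

def expand(text):
    """Replace every known abbreviation sign with its expansion."""
    for sign, expansion in ABBREVIATIONS.items():
        text = text.replace(sign, expansion)
    return text

print(expand("In prīcipio fecit de⁹ celū ⁊ terram"))
# In principio fecit deus celum et terram  (spelling variants remain)
```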
Uncovering censorship in the 16th century with Transkribus and Python. Episode II: Let Python speak to Transkribus
In the first episode of this miniseries, I explained how to use OCR with Transkribus to create a diplomatic transcription from images of 16th century Latin prints. However, the resulting text is full of abbreviations and may contain transcription errors. In both cases, Python scripts could help to normalize the text and to detect possible errors.
An indispensable prerequisite for extending the Transkribus workflow with Python is reliable access to the data stored on the Transkribus server. It would not make much sense to manually download the data, then run some Python scripts on the command line, and finally upload the data again manually, especially when hundreds of pages are to be processed. This episode will show how to download and upload transcriptions from and to the Transkribus server using Python. (Basic to intermediate knowledge of Python is required.)
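A minimal sketch of the first step, logging in and listing collections with the requests package; the endpoint URLs and JSON keys follow the Transkribus REST documentation as I recall it, so treat them as assumptions and verify them against the current API reference:

```python
import requests

# Hedged sketch of authenticating against the Transkribus REST API.
# Endpoint paths and JSON keys are assumptions to be checked against
# the official API documentation.
BASE = "https://transkribus.eu/TrpServer/rest"

session = requests.Session()
response = session.post(f"{BASE}/auth/login",
                        data={"user": "you@example.com", "pw": "secret"})
response.raise_for_status()  # the session now carries the auth cookie

# List the collections the account has access to.
collections = session.get(f"{BASE}/collections/list").json()
for collection in collections:
    print(collection["colId"], collection["colName"])
```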
Doing Digital History with Python IV: web automation
Before digital humanists can do things with data, they first need to collect them, and web automation (or, more specifically, methods of web scraping) can be a quick way of gathering a large amount of data. While web automation denotes every remotely controlled action performed on the web, web scraping, web mining and web harvesting are focused on reading and processing information found on websites. This blog post presents useful Python packages for these tasks and explains the advantages of working with browser profiles.
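A minimal scraping sketch with requests and BeautifulSoup, two of the usual suspects for such tasks; the URL is a placeholder, and any real project should respect robots.txt and the site’s terms of use:

```python
import requests
from bs4 import BeautifulSoup

# Fetch a page (placeholder URL) and extract all link texts and targets.
response = requests.get("https://example.com/articles")
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
for link in soup.find_all("a"):
    print(link.get_text(strip=True), link.get("href"))
```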
Exploring connections: Digital workshop on Network Analysis with Python
In times of Covid-19, virtual workshops can be quite hard for organizers as well as for participants. On the other hand, they offer a unique opportunity to try unconventional methods to improve the situation for both sides. Dr. Demival Vasques Filho, in cooperation with Anna Aschauer, seized such an opportunity at the DARIAH-DE workshop Network Analysis with Python for Beginners, when he decided to simultaneously code and explain the basics of network analysis. Furthermore, he managed to address academics from different research areas with the help of some well-known fictional characters.
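In the spirit of the workshop, here is a tiny network of well-known fictional characters built with networkx; the edge list is invented for illustration:

```python
import networkx as nx

# Build a small undirected network of fictional characters.
G = nx.Graph()
G.add_edges_from([
    ("Frodo", "Sam"), ("Frodo", "Gandalf"),
    ("Gandalf", "Aragorn"), ("Aragorn", "Sam"),
])

print(nx.degree_centrality(G))  # who is most connected?
print(nx.density(G))            # how tightly knit is the network?
```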
LinkedArt: exploring network analysis in art history
by Sophia Renz and Vanessa Tissen
The beginning
It all started with the seminar on network analysis in the summer semester of 2020. After learning about the basics of network theory and building networks in Python ourselves, the teachers Aline Deicke and Demival Vasques Filho asked us students to work in groups to develop a project combining our individual humanities backgrounds with network analysis. As specialists in art history, we wanted to bring that perspective into the project. On top of that, the IEG DH Lab provided us with funds and support to further explore the application of network analysis in the field, e.g. whether art history datasets are available, to what extent they are usable, and which art-historical analyses or topics have already been undertaken. The research project was kept relatively open, so we were able to look at the subject matter first; tasks and questions developed during the following research.
„Hello, World!“: a Python course for beginners with the Codingschule Düsseldorf
By Alessandro Grazi
My adventure in the world of the Digital Humanities, which started about a year ago in Innsbruck, continued last October and November with a Python course for beginners offered by the Codingschule Düsseldorf.
I did not know what to expect that autumn Wednesday evening when, at 6 pm, I connected to the Zoom link of the Python course I was going to attend.
Text to XML with Python, based on the “Bomber’s Baedeker”
by Felix Bach and Cristian Secco
Transforming digitized print works from an image file into a machine-readable XML file is an important data-preparation step for numerous Digital Humanities methods. In this post, we present an approach based on a Python script, using as our example a work with a distinctive internal structure: the Bomber’s Baedeker was a “travel guide” used by the Royal Air Force to target German industrial sites during the Second World War.
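A minimal sketch of the text-to-XML step using the standard library’s xml.etree.ElementTree; the element names and the two sample entries are invented and do not reflect the schema of the original project:

```python
import xml.etree.ElementTree as ET

# Turn line-based plain text (name/description pairs) into simple XML.
# Element names and sample entries are invented for illustration.
raw_text = """Essen
Krupp works, steel production
Bochum
Mining and coking plants"""

root = ET.Element("places")
lines = raw_text.splitlines()
for name, description in zip(lines[::2], lines[1::2]):
    place = ET.SubElement(root, "place")
    ET.SubElement(place, "name").text = name
    ET.SubElement(place, "description").text = description

ET.dump(root)  # prints the generated XML to stdout
```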
Doing Digital History with Python III: topic modelling with Gensim, spaCy, NLTK and SciKit learn
In April 2020, we started a series of case studies to introduce researchers working with historical sources to data analysis and data visualisation with Python. Today’s blog post covers topic modelling with the Python packages Gensim, spaCy, NLTK and SciKit learn.
Topic modelling is one of the central methods of Natural Language Processing (NLP), the “automatic manipulation of natural language, like speech and text, by software” (Jason Brownlee: What Is Natural Language Processing?, in: Deep Learning for Natural Language Processing, 22nd September 2017). In its most basic form, a “topic” modelled by software displays word co-occurrences in texts, assuming that the frequency of co-occurrences defines certain areas of meaning.
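A minimal topic-modelling sketch with Gensim’s LdaModel; the toy corpus is invented and far too small to yield meaningful topics, but it shows the basic workflow of dictionary, bag-of-words corpus, and model:

```python
from gensim import corpora, models

# Toy corpus: each document is a list of (already preprocessed) tokens.
texts = [
    ["censorship", "index", "catholic", "books"],
    ["printing", "press", "books", "type"],
    ["censorship", "catholic", "index", "rome"],
    ["printing", "type", "press", "paper"],
]

dictionary = corpora.Dictionary(texts)                # token -> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```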
Doing Digital History with Python II: creating custom Word Clouds
by Monika Barget
In the second edition of Doing Digital History with Python, I would like to address word clouds as a visual method of finding patterns in texts (see the critical reflection in Basic Text Mining: Word Clouds, their Limitations, and Moving Beyond Them). Word clouds display the frequency or importance of individual keywords in individual texts or entire corpora. There are many ready-made tools in multiple languages that help you create word clouds in different designs, such as the built-in word cloud generator in Voyant Tools or browser-based tools such as Wortwolken.com. However, not all of them may be suitable for your specific use case.
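For cases where the ready-made tools fall short, a minimal custom word cloud with the Python wordcloud package, one option among several; the sample text is invented:

```python
from wordcloud import WordCloud

# Generate a word cloud from a (invented) sample text and save it as PNG.
text = ("history digital history sources archives maps networks "
        "history texts corpora visualisation texts history")

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(text)
cloud.to_file("wordcloud.png")  # writes the image next to the script
```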
Doing Digital History with Python I: reading (messy) XML & JSON data
by Monika Barget
During our DH brownbag lunches at the IEG, colleagues have repeatedly asked us if we could recommend Python packages for digital history. We have therefore set up a list of packages we at the IEG DH Lab are using for the analysis of text (stored, for instance, in XML/TEI or JSON formats), the modelling of historical networks, or the creation of interactive maps.
The list Python for digital history is based on our personal experiences and, though by no means exhaustive, may serve as an appetizer for “Doing Digital History with Python”. In a series of blog posts, we will try and introduce you to some of the packages mentioned through case studies from current IEG research.
Today’s post covers the extraction of data from XML and JSON files with xml.etree.ElementTree, lxml, json(5) and beautifulsoup(4), as reading structured text is often a starting point of digital history projects.
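A minimal sketch of reading both formats with the standard library alone; the file names, element names, and keys are placeholders:

```python
import json
import xml.etree.ElementTree as ET

# Read records from an XML file (placeholder structure).
tree = ET.parse("letters.xml")
for letter in tree.getroot().iter("letter"):
    print(letter.findtext("date"), letter.findtext("sender"))

# Read the same kind of records from a JSON file (placeholder structure).
with open("letters.json", encoding="utf-8") as f:
    data = json.load(f)
for letter in data["letters"]:
    print(letter["date"], letter["sender"])
```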