
Doing Digital History with Python III: topic modelling with Gensim, spaCy, NLTK and SciKit learn

by Monika Barget

In April 2020, we started a series of case studies to introduce researchers working with historical sources to data analysis and data visualisation with Python. Today’s blog post covers topic modelling with the Python packages Gensim, spaCy, NLTK and SciKit learn.

Topic modelling is one of the central methods of Natural Language Processing (NLP), the “automatic manipulation of natural language, like speech and text, by software.” (Jason Brownlee: What Is Natural Language Processing?, in: Deep Learning for Natural Language Processing, 22nd September 2017) In its most basic form, a “topic” modelled by software displays word co-occurrences in texts, assuming that the frequency of co-occurrences defines certain areas of meaning. If “country” and “border” often co-occur in your corpus, the text might be about politics, or more specifically: international relations or migration. If “country” and “riding” co-occur, the text more likely covers hunting or other outdoor activities.

The context provided by topic models can help researchers disambiguate word usage or identify words with similar meanings in large corpora. As a historian, I am using topic modelling to identify major themes in text corpora that I am not yet familiar with, and to compare the content of sources on a macro level. Over time, different algorithms and toolkits have been developed to address different needs such as multilingual corpora or corpora in older language variants that require additional lexical features. A standard toolkit widely used for topic modelling in the humanities is Mallet, but there is also a growing number of Python packages you may want to check out.

The most feature-rich Python topic modelling package is Gensim, which was specifically created for “topic modelling, document indexing and similarity retrieval with large corpora”. Like most Python packages for data analysis, it depends on NumPy and SciPy. Topic modelling with Gensim is widely covered in online tutorials, exemplifying the analysis of very different corpora. Selva Prabhakaran, for instance, covers topic modelling of modern texts that contain email addresses and thus require the removal of the @ sign and other special characters. A useful package for text preprocessing is spaCy, which offers non-destructive tokenization, named entity recognition, pretrained word vectors, syntax-driven sentence segmentation etc. Therefore, spaCy is often used in combination with Gensim. The topic models most commonly built with Gensim use the Latent Dirichlet Allocation (LDA) algorithm, which performs very well if you wish to identify topics in a large text corpus.
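
To make that workflow more concrete, here is a minimal sketch of preprocessing two documents with spaCy and passing them to Gensim’s LDA model. The sample sentences, the spaCy model name and the parameter values are placeholders chosen for illustration, not taken from the tutorials mentioned above:

import spacy
from gensim import corpora, models

nlp = spacy.load("en_core_web_sm") # small English model, assumed to be installed

docs = ["The country closed its border to migrants.",
        "Riding through open country is a popular pastime."]

# lemmatise and keep only alphabetic tokens that are not stop words
tokenized = [[token.lemma_.lower() for token in nlp(doc)
              if token.is_alpha and not token.is_stop]
             for doc in docs]

dictionary = corpora.Dictionary(tokenized) # map tokens to integer ids
bow_corpus = [dictionary.doc2bow(text) for text in tokenized] # bag-of-words vectors

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=1)
print(lda.print_topics()) # most prominent words per topic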

Another package that is often used for topic modelling in combination with others is NLTK, Python’s Natural Language Toolkit, which I especially recommend because of its well-built and expandable stop word lists. Stop word lists are vital to exclude prepositions, auxiliary verbs and other expressions that are not directly content-relevant from the analysis. Otherwise you may end up with topics that exclusively contain words like “the”, “is”, “are”, “in”, “and” or “from”. Most topic modelling tools thus provide built-in stop word lists for widespread use cases such as the analysis of present-day English-language newspaper reports. These stop word lists, however, fall short when you are dealing with minority languages or early modern German.

For my own topic modelling in (early modern) history, I usually require stop words in more than one language and need to include older word forms that are not caught by modern dictionaries. With NLTK, you can either import customised stop word lists in .txt format or expand existing lists within the script:

import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')

# load the custom NLTK stop word list 'en_fr_de', stored locally in the nltk-data folder
my_stopwords = stopwords.words('en_fr_de')

# extend the list if necessary
my_stopwords.extend(['@', 'forschungsbereich', 'frage'])
print(my_stopwords)
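
If no customised list is stored in the nltk-data folder, a similar multilingual list can also be pieced together from NLTK’s standard English, French and German lists and topped up from a plain .txt file; the file name my_stopwords.txt in the following sketch is hypothetical:

from nltk.corpus import stopwords

# combine the built-in English, French and German lists
my_stopwords = stopwords.words('english') + stopwords.words('french') + stopwords.words('german')

# add further stop words from a local .txt file with one word per line (hypothetical file name)
with open('my_stopwords.txt', encoding='utf-8') as f:
    my_stopwords.extend(line.strip() for line in f if line.strip())

print(len(my_stopwords)) # size of the combined stop word list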

Apart from Gensim, NLTK also works well with Python’s SciKit learn package. SciKit learn is likewise based on the standard packages NumPy, SciPy and matplotlib and can be used for various data analysis tasks. For topic modelling, it offers both LDA and Non-Negative Matrix Factorization (NMF); the algorithms are detailed in the SciKit learn documentation. Generally speaking, NMF is a useful alternative to LDA when analysing smaller data sets. How the topic modelling functionalities of SciKit learn differ from Gensim’s is described in Aneesha Bakharia’s “Topic Modelling with Scikit Learn”.

For a paper on feminist DH, I have used SciKit learn to analyse comparatively small numbers of relatively short texts (letters, postcards and telegrams) whose overall vocabulary was limited in size but included many spelling variants and foreign-language expressions. The script I wrote was based on a tutorial by Allen Riddell whose original URL has since expired:

#!/usr/bin/env python
# coding: utf-8

# script for topic modelling based on Scikit Learn and NLTK stopword list
# adapted from tutorial provided by Allen Riddell (https://github.com/ariddell)

# import necessary modules

import os
import numpy as np
import sklearn.feature_extraction.text as text # submodule gathers utilities to build feature vectors from text documents
from sklearn import decomposition
from sklearn.feature_extraction.text import TfidfVectorizer # alternative vectoriser to CountVectorizer (not used below)
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')

# call custom NLTK stopword list

my_stopwords=stopwords.words('en_letters')
my_stopwords.extend(['x2014', '39', 'st', 'quot', 'us', 'didn', 'couldn', 'shouldn', 'wouldn', '000']) # extend list if necessary
print(my_stopwords) # show updated stop word list (optional)

# read .txt files to be analysed from directory

CORPUS_PATH=("C:\\Users\\mobarget\\Google Drive\\ACADEMIA\\7_FeministDH for Susan\\7_Transcriptions from JSON files\\JSON_all_files_TXT")

filenames=sorted([os.path.join(CORPUS_PATH, fn) for fn in os.listdir(CORPUS_PATH)])
print(len(filenames)) # count files in corpus
print(filenames[:10]) # print names of 1st ten files in corpus

vectorizer=text.CountVectorizer(input='filename', stop_words=my_stopwords, min_df=1) # apply stopword list and vectorise data

# min_df: float in range [0.0, 1.0] or int, default=1
# When building the vocabulary, ignore terms that have a document frequency
# strictly lower than the given threshold (also called "cut-off" in the literature).
# If float, the parameter represents a proportion of documents; if int, absolute counts.
# This parameter is ignored if vocabulary is not None.

dtm=vectorizer.fit_transform(filenames).toarray() # defines document term matrix

vocab=np.array(vectorizer.get_feature_names_out()) # get_feature_names() was renamed to get_feature_names_out() in scikit-learn 1.0

print(dtm.shape)

print(len(vocab)) # vocabulary size of corpus

num_topics=18 # no. of topics

num_top_words=25 # no. of words per topic to be displayed

clf=decomposition.NMF(n_components=num_topics, random_state=1) # Non-Negative Matrix Factorization (NMF)

# for full documentation see: 
# https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html#sklearn.decomposition.NMF

# random_state : int, RandomState instance or None, optional (default=None)
# If int, random_state is the seed used by the random number generator;
# if RandomState instance, random_state is the random number generator;
# if None, the random number generator is the RandomState instance used by np.random.

doctopic=clf.fit_transform(dtm) 

topic_words=[] # list of most prominent words associated with topics
for topic in clf.components_: 
   
    word_idx=np.argsort(topic)[::-1][0:num_top_words]
    topic_words.append([vocab[i] for i in word_idx])
    
print(topic_words) # output results
    
doctopic=doctopic/np.sum(doctopic,axis=1,keepdims=True)
    
text_names=[] # names of texts associated with each topic, corresponding to topic shares in MALLET    
for fn in filenames:
    
    basename=os.path.basename(fn)
    name, ext=os.path.splitext(basename)
    name=name.rstrip('0123456789')
    text_names.append(name)

text_names=np.asarray(text_names) # turn into array to use NumPy function
        
print(text_names) # list all file names in corpus (optional)
        
doctopic_orig=doctopic.copy()
        
print(doctopic_orig)
        
num_groups=len(set(text_names))
print(num_groups) 
        
doctopic_grouped=np.zeros((num_groups, num_topics)) 
        
for i, name in enumerate(sorted(set(text_names))):
    doctopic_grouped[i,:]=np.mean(doctopic[text_names==name,:],axis=0)
            
doctopic=doctopic_grouped
            
texts=sorted(set(text_names))
            
print("Top NMF topics in ...") 
            
for i in range(len(doctopic)):
    top_topics=np.argsort(doctopic[i,:])[::-1][0:3]
    top_topics_str=' '.join(str(t) for t in top_topics)
    print("{}:{}".format(texts[i],top_topics_str)) # show top topics per text in corpus

for t in range(len(topic_words)):
    print("Topic{}: {}".format(t, ' '.join(topic_words[t][:num_top_words]))) # show the top words for each topic

print("done") # confirm successful completion

The full adapted script is available on GitHub: Topic Modelling_Python_adapted-script.

As I was struggling with the built-in stop word lists, I decided to integrate combined English, French and German NLTK stopwords, following discussions with developers and other users in the SciKit learn GitHub issues.

Eighteen topics (as indicated in the script) turned out to produce the most meaningful result for the full collection of transcribed letters available to me at the time, but for the analysis of the much smaller “Charlie Daly” collection of Irish Republican letters, only five topics made sense. These optimal numbers of topics were identified by running the script several times and semi-manually comparing the results. I used a separate Python script to count the number of identical words shared between topics as a rough guideline (see the sketch below), but as words like “poor” could come up in very different contexts, from social injustice to religion, measuring topic diversity by word counts alone wasn’t quite enough.
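
The separate counting script is not reproduced here, but the rough idea can be sketched in a few lines; topic_words is assumed to be the list of word lists produced by the NMF script above:

from collections import Counter

# count how often each word appears across all topics as a simple diversity check
# (assumes topic_words is the list of word lists created in the script above)
word_counts = Counter(word for topic in topic_words for word in topic)
repeated = {word: count for word, count in word_counts.items() if count > 1}
print(len(repeated), "words occur in more than one topic:", repeated)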

There are several ways to automatically calculate topic coherence in Python, and Gensim makes it very easy to access these features. (Shashank Kapadia: Evaluate Topic Models: Latent Dirichlet Allocation (LDA). A step-by-step guide to building interpretable topic models, 19th August 2019) Gensim’s built-in CoherenceModel class calculates coherence scores, and tutorials such as Kapadia’s wrap it in a compute_coherence_values() helper to evaluate multiple models at once. SciKit learn, for its part, does not include a coherence calculator and requires a work-around. This is why creating topics with Gensim may be preferable if you do not know your corpus well and depend on quantitative methods to make sense of it. My very condensed data set, which also included French, Irish and German expressions, however, was better judged by humans.
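
As a minimal sketch of how such a coherence score can be obtained with Gensim’s CoherenceModel class, reusing the lda, tokenized and dictionary objects from the Gensim sketch above (meaningful scores of course require a much larger corpus than that toy example):

from gensim.models import CoherenceModel

# compute a c_v coherence score for the LDA model built in the earlier sketch
coherence_model = CoherenceModel(model=lda, texts=tokenized,
                                 dictionary=dictionary, coherence='c_v')
print("Coherence (c_v):", coherence_model.get_coherence())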

These limitations of what automated topic analysis can achieve will most likely apply to the majority of historical use cases, especially where pre-modern sources are concerned. Unfortunately, advice on historical topic modelling covering texts written prior to the 1800s is still scarce. In current historical research, topic modelling is extensively used for the analysis of 19th- and 20th-century newspaper collections. (Cf. Integrating New Methods in Historical Research (part 2): Exploring Newspapers with Topic Modeling by Milan van Lange, 22nd May 2019) The German paper Einsatz von Topic Modeling in den Geschichtswissenschaften: Wissensbestände des 19. Jahrhunderts by Martin Fechner and Andreas Weiß (December 2017) is nevertheless useful for historians of all time periods.

For further reading, I also recommend Topic Modeling on Historical Newspapers by Tze-I Yang, Andrew Torget and Rada Mihalcea (June 2011), Historizing topic models: A distant reading of topic modeling texts within historical studies by Rene Brauer and Mats Fridlund (2013), and Topic Modeling and the Historical Geography of Scotland by Michael Gavin and Eric Gidal (2016), who use topic modelling for 18th- and 19th-century Scottish topographical sources.




