by Johanna Mauermann and Sarah Oberbichler
When it comes to analysing large collections of historical documents – like digitized historical newspapers – language models are incredibly powerful tools. But as these technologies become more common in humanities research, scholars also need to think critically about how they use them.
One big question researchers face when using language models for their analysis is bias. As Rob Kitchin describes it in his book "Critical Data Studies," bias can be understood as "a consistent pattern of error within a dataset, or within a method of data processing and analysis, that skews findings and interpretation" (Kitchin, 2024). Biases manifest in AI models on various levels: in the model design, in the data itself, in the model's application to contexts it wasn't trained for, and in the prompts themselves (compare, e.g., Ferrer et al., 2021). Addressing questions of bias is therefore crucial for ensuring that our historical research stands up to scrutiny, and Good Scientific Practice demands that we take these challenges seriously.
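To make the prompt level of this taxonomy concrete, the sketch below runs the same extraction task with two differently worded prompts and prints both outputs for comparison. This is a hypothetical illustration, not the pipeline used in the study: the OpenAI client, the model name, and both prompt wordings are assumptions introduced here for demonstration only.

```python
# Minimal prompt-sensitivity sketch. Everything here is illustrative:
# the OpenAI client, model name, and prompt texts are assumptions,
# not the setup used in this study.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT_VARIANTS = {
    "specific": (
        "Extract every article about the Messina earthquake from the "
        "following newspaper page:\n\n{page}"
    ),
    "generic": (
        "Extract every article about natural disasters from the "
        "following newspaper page:\n\n{page}"
    ),
}


def extract(page_text: str, template: str, model: str = "gpt-4o-mini") -> str:
    """Run one extraction with the given prompt template."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": template.format(page=page_text)}],
        temperature=0,  # reduce run-to-run variation so prompt effects stand out
    )
    return response.choices[0].message.content


def compare_variants(page_text: str) -> None:
    """Print the output of each prompt variant for side-by-side comparison.

    Consistent differences between the variants point to prompt-level bias
    rather than a property of the source page itself.
    """
    for name, template in PROMPT_VARIANTS.items():
        print(f"--- {name} prompt ---\n{extract(page_text, template)}\n")
```

Holding the model and source text constant while varying only the prompt wording isolates one bias level at a time, which is the same logic that applies to probing model-design or training-data effects.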