by Johanna Mauermann and Sarah Oberbichler
When it comes to analysing large collections of historical documents – like digitized historical newspapers – language models are incredibly powerful tools. But as these technologies become more common in humanities research, scholars also need to think critically about how they use them.
One big question researchers face when using language models for their analysis is bias. As Rob Kitchin describes in his book "Critical Data Studies," bias can be understood as "a consistent pattern of error within a dataset, or within a method of data processing and analysis, that skews findings and interpretation" (Kitchin, 2024). Biases manifest in AI models on various levels: in the model design, in the data itself, in the application of a model to contexts it wasn't trained for, and in the prompts themselves (compare e.g., Ferrer et al., 2021). Addressing questions of bias is therefore crucial for ensuring that our historical research stands up to scrutiny, and Good Scientific Practice demands that we take these challenges seriously.
But how can we be confident that our findings aren’t significantly distorted by the AI tools we’re using? In this blog post, we tackle this crucial question through a systematic evaluation of Large Language Models (LLMs) used to extract articles from multilingual historical newspapers. Our case study focuses on the coverage of the catastrophic 1908 Messina earthquake in southern Italy.
This evaluation framework was developed by Johanna Mauermann (under the supervision of Sarah Oberbichler) during her three-month internship at the Leibniz Institute of European History in Mainz as part of the master's program Digitale Methodik in den Geistes- und Kulturwissenschaften at the Gutenberg University Mainz. The goal of the internship project – a side study of the project "Transnational Flows of News: Analysis and Visualization of Historical News across Languages and Countries through Case Studies on (Return) Migration and Environmental/Natural Disasters, 1850-1950" – was to create a comprehensive and critical approach to evaluating prompts and LLM outputs that goes beyond simple output assessment. The framework therefore also addresses fundamental questions about bias:
- Language Barriers: Do different models have varying capabilities across languages? Does the language of the prompt matter?
- Historical Understanding: How well do the models understand historical context? Are they able to find articles related to a specific event without being given further historical context?
- Format Preferences: Are LLMs biased towards specific output formats?
Understanding the implications of these questions requires an awareness of how LLMs are trained. Just as a library’s value stems from its carefully curated collection, a large language model’s capabilities are shaped by its training data. When we use these models to analyze historical newspapers, their performance largely depends on the quality and scope of the materials they were trained or fine-tuned on. In essence, a language model can only be as good as the knowledge it acquired during its learning phase.
Even though large language models learn from millions of sources during training, they still capture only a fraction of human knowledge. Companies developing these models must make choices about what content to include. Even tech giants with massive resources face a fundamental limitation: they primarily rely on dominant sources such as English-language content and widely used, freely available platforms like Wikipedia, while other valuable knowledge sources remain under-represented (Navigli et al., 2023).
As we can see in Fig. 1, Wikipedia contains considerably more English-language content, with sports, music, and places being the dominant domains. Historical content, on the other hand, is far less well represented.

Knowing about those challenges when working with large language models, why do we even consider them for historical research?
As described in an earlier blog post, building large topic-specific corpora with historical newspapers is still very challenging, especially the separation of articles within a newspaper issue. Large language models can be essential to assist with this step, where traditional methods – such as article separation based on layout information – have been problematic so far.
To evaluate the quality and potential biases in LLM outputs, we had to develop a systematic approach: creating a test dataset, designing effective prompts, and building a comprehensive evaluation framework.
Step 1: Building an Evaluation Dataset: The 1908 Messina Earthquake
In order to systematically evaluate the effectiveness of LLMs with a broad variety of prompts, we first created a topic-specific evaluation dataset containing newspaper pages in German, English, and French. As a topic, we chose the 1908 earthquake in Southern Italy.

Some context to this historical event: In the early hours of December 28, 1908, a massive earthquake struck Southern Italy, leaving widespread destruction in its wake. Messina, a city on the island of Sicily, was hit hardest by the earthquake, which was followed by a devastating tsunami and fires that raged for days. Entire neighbourhoods were leveled, and over half of the city's 140,000 residents lost their lives. The devastation extended across the Strait of Messina to Reggio Calabria and nearby villages, with over 200,000 individuals killed, injured, or left without shelter. This disaster, the deadliest earthquake ever recorded in Europe, captured global attention through extensive news reports.
This extensive coverage makes the Messina earthquake an excellent case study for building a multilingual dataset, but it also presents significant challenges: articles reporting on the Messina earthquake are often exceptionally long, and the event's complexity demands substantial contextual knowledge in order to identify relevant articles.
We focused on data from three prominent newspapers of the time: The Dresden Daily (English), Le Petit Parisien (French), and Norddeutsche Allgemeine Zeitung (German). We collected the data using APIs from the German Digital Library and the French National Library. Topic-specific keywords (“tremblement*”, “messine”, “séisme”, “earthquake*”, “messina”, “Erdbeben*”, “Messina”) were used to extract newspaper issues that potentially contain relevant articles, and we included newspaper issues from the day of the earthquake until the end of 1909. We manually created a “ground truth” as the benchmark for evaluation, which involved meticulously reviewing the newspaper issues and identifying and extracting relevant articles.
The final dataset consists of 875 entries containing context windows around the keywords used, drawn from 528 different newspaper issues.
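To make the collection step more concrete, the following minimal sketch illustrates how newspaper issues matching the topic keywords could be retrieved from a digital library API. It is not the project's actual harvesting code: the endpoint URL, parameter names, and response fields are placeholders that differ between providers such as the German Digital Library and the French National Library.

# Minimal sketch (not the project's harvesting code): querying a digital library
# API for newspaper issues whose OCR text matches a topic keyword.
# The endpoint and parameter names below are placeholders.
import requests

KEYWORDS = ["tremblement*", "messine", "séisme", "earthquake*", "messina", "Erdbeben*"]
BASE_URL = "https://api.example-library.org/search"  # placeholder endpoint

def find_candidate_issues(keyword, start="1908-12-28", end="1909-12-31"):
    """Return identifiers of issues matching a keyword within the date range."""
    params = {"q": keyword, "from": start, "to": end, "type": "newspaper"}
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    return [hit["id"] for hit in response.json().get("results", [])]

candidates = {kw: find_candidate_issues(kw) for kw in KEYWORDS}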
Step 2: The Quest for the Perfect Prompt
In The Quest for the Perfect Prompt, we set out with a clear (if unrealistic) goal: to create a prompt that could extract all topic-relevant articles exactly as they appear in the newspaper – no omissions, no additions, no changes. There are many key considerations to take into account:
- Defining the Task: What constitutes an article?
- Relevance Criteria: When is an article relevant to the Messina earthquake?
- Handling Missing Data: What should the output look like if no relevant article is found on a newspaper page?
- Dealing with Fragmentation: How should incomplete or split articles (spread across multiple paragraphs) be processed?
- Explainability: How did the model arrive at its results?
- Additional Concerns: How to detect article boundaries, handle edge cases, accommodate formatting preferences, deal with uncertainty, etc.
After defining the tasks for the model (which closely mirrored our guidelines for manually annotating the evaluation dataset), we also had to think about further strategies:
- Prompting Strategy: Do we use zero-shot or few-shot prompts? Do we include system prompts?
- Prompt Language: Writing prompts in the language of the source or in English?
- Historical Context: How much historical context to the Messina earthquake do we provide within the prompt?
Given the complexity of this task, the search for a prompt capable of extracting relevant articles began as unpredictably as the tremors of the Messina earthquake, but over time, it evolved into a process with clearer outcomes.
Example of a complex prompt in English:
Please follow these specifications:
Definition of an article: An article is a semantic unit in the text, clearly distinguished from preceding and following content (e.g., through its own headline).
Relevance criteria: An article is relevant if it has the Messina earthquake of December 1908 or its consequences as a topic. Relevant consequences can include:
- Effects on the population (e.g., health crises, forced relocations, relief efforts and donations)
- Aftermaths and consequences
- Political and economic developments related to the earthquake
Response format:
- If one or more relevant articles are found, structure your response using XML tags as shown in the following example, using the tags article, verification, and human_verification_needed (True or False): <article>complete extracted article content</article> <verification>Is unit coherent? Is topic present? Is article complete? All articles found?</verification> <human_verification_needed>False</human_verification_needed>
- Return all relevant articles in their original form, without additions, omissions, corrections, or comments.
- If no relevant articles about the Messina earthquake are found (e.g., if it concerns another earthquake), no special structuring is needed; simply return "No relevant article found." without further explanations.
Notes on segmentation:
- Ensure that articles spread across multiple paragraphs are treated as a single unit.
Human verification needed:
- Can have the values "True" or "False"
- False: If you believe you have correctly segmented the article and assessed its relevance.
- True: If you are unsure whether you have captured the complete content of the article as contained in the newspaper document or whether it is relevant.
Example of a simple prompt in English:
Please identify and extract articles that relate to the severe Messina earthquake in December 1908 or its aftermath in the provided newspaper document. If you don't find any relevant articles relating to this topic, simply return 'No relevant article found.' If you find one or more articles, return their full, unchanged content (beginning to end) structured in xml format, each wrapped in <article> tags.
To refine the prompt design, we created a small-scale test dataset for each language, consisting of 15 newspaper pages and including edge cases such as extremely long or fragmented articles. Although this dataset may not yield universally transferable results and carries a certain risk of overfitting – where the prompt becomes too tailored to this specific dataset – it allowed for faster evaluation of strategies and avoided wasting computational resources. What followed was a process of iterative refinement:
- Testing prompts to identify what worked and what didn't.
- Analyzing failure modes (e.g., why irrelevant articles were selected while relevant ones were missed).
- Refining prompts.
Overall, it is unclear whether the “perfect” prompt for article extraction exists. The quest for the perfect prompt is an iterative process, and there is no clear endpoint since there is always another prompting strategy, a parameter left unadjusted, or a different wording to try. Perhaps the best approach is to stop once the results are good enough, rather than striving for perfection.
We tried many prompting strategies with many different models, some of which clearly worked better than others. While we used the same prompts for each model for the comparison, we learned that different models need different prompts: larger models (e.g., GPT-4o) often performed as well, if not better, with minimal prompts containing no examples or contextual details, while smaller, local models required more detailed instructions, often illustrated with examples. More about our findings can be found in the section What Did We Learn? But before we dive into that, we'd like to shed light on how we developed our evaluation framework, which enables us to quantitatively assess prompt performance.
Step 3: Building a Systematic Evaluation Framework
When it comes to the separation of articles in historical newspapers, several approaches exist (e.g., Girdhar et al., 2023; Weidemann et al., 2019; Sun et al., 2024). These article separation methods rely on layout analysis or combine layout analysis with semantic processing. We're taking a different route – using just OCR text, no layout information. This approach lets us process faster, target specific articles, and flexibly define what counts as an article through prompts. Since no systematic evaluation of this kind of information extraction exists yet, we needed to develop our own methodology. A big help in this process was Carlos-Emiliano González-Gallardo (Associate Professor at the University of Tours, France), who provided advice and confirmed our chosen solutions.
On a very simplified level, the evaluation framework we developed works as follows:

The code further handles various edge cases like missing data, multiple articles per page, and format mismatches. It uses fuzzy matching to account for OCR imperfections and provides detailed diagnostics about extraction quality.
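As a minimal sketch of the matching idea (written in Python; the thresholds, category names, and helper functions are our illustrative assumptions, not the framework's actual code), the comparison between a model's extraction and the manually created ground truth might look like this:

# Illustrative sketch: compare a model-extracted article against the ground truth
# using fuzzy string similarity, so minor OCR differences do not count as errors.
from difflib import SequenceMatcher

def fuzzy_similarity(extracted: str, ground_truth: str) -> float:
    """Similarity between 0 and 1, tolerant of small OCR-level differences."""
    return SequenceMatcher(None, extracted.lower(), ground_truth.lower()).ratio()

def classify_extraction(extracted: str, ground_truth: str, threshold: float = 0.9) -> str:
    """Very rough categories; the real framework distinguishes more edge cases."""
    if not ground_truth:  # page contains no relevant article
        return "true_negative" if "no relevant article" in extracted.lower() else "false_positive"
    if "no relevant article" in extracted.lower():
        return "false_negative"
    score = fuzzy_similarity(extracted, ground_truth)
    if score >= threshold:
        return "correct_extraction"
    # much shorter output with partial overlap points to under-extraction (truncation)
    return "under_extraction" if len(extracted) < len(ground_truth) else "partial_match"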
The process outlined in Fig. 4 was carried out using various mid-size LLMs, either locally or via API access, along with a range of different prompts in different languages. For maximum transparency, we document each step, including the model version used (e.g., Llama 3.1 70B or GPT-4o), the API providers, and the GPU specifications when models were run locally:

Here we present some of our first results on the German dataset. We tested six different models with both German and English prompts, in simple and complex variants. The models used for the analysis are:
- GPT 4o via OpenAI API
- Llama 3.1 70B via Ollama (locally) on eight Nvidia Tesla A100 40G GPUs
- Nvidia Nemotron Llama 3.1 70B Instruct via Ollama (locally) on eight Nvidia Tesla A100 40G GPUs
- Qwen 2.5 72B via Ollama (locally) on eight Nvidia Tesla A100 40G GPUs
- Claude 3.5 via Anthropic API
- DeepSeek R1 via DeepSeek API
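As an illustration of how such runs can be set up, the following sketch shows how one of the locally hosted models listed above could be queried through Ollama's Python client with the simple English prompt. The model tag, system message, and temperature setting are examples rather than the exact configuration used in the study.

# Hedged sketch: querying a locally hosted model via Ollama's Python client
# (pip install ollama); prompt and settings are illustrative, not the study's exact setup.
import ollama

SIMPLE_PROMPT = (
    "Please identify and extract articles that relate to the severe Messina earthquake "
    "in December 1908 or its aftermath in the provided newspaper document. ..."
)

def extract_articles(ocr_text: str, model: str = "llama3.1:70b") -> str:
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": "You are an assistant for historical newspaper analysis."},
            {"role": "user", "content": f"{SIMPLE_PROMPT}\n\nNewspaper document:\n{ocr_text}"},
        ],
        options={"temperature": 0},  # deterministic output makes evaluation easier
    )
    return response["message"]["content"]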
What Did We Learn?
Our evaluation of various large language models reveals critical insights into their performance, biases, and limitations. By systematically testing these models, we identified key patterns and challenges that are essential for understanding and mitigating AI bias. Below, we summarize some of our preliminary findings and their implications.
When we began this investigation, we expected to primarily find biases in data selection and therefore in representation. Our focus was on language biases and domain expertise gaps, particularly in historical knowledge. However, our systematic evaluation revealed that other model design choices played a role in creating biases that was as significant as – if not more significant than – we had anticipated.
Data Selection Bias
It is often assumed that LLMs struggle with a nuanced understanding of languages other than English because of the over-representation of English training material. Similarly, models exhibit domain biases stemming from the selection of their training source material. From our systematic evaluation on the German data, we learned the following:
Overall Results Show that All Models Deal Well with German Sources: Looking at Fig. 5, all models except Claude achieve success rates of 80% or higher, depending on prompt and language. This shows that these models can effectively process historical German texts, suggesting that their understanding isn't limited by language capabilities. This is also true for Claude, and we will explain shortly what other kinds of biases skewed its results. Fig. 5 further shows that there is no clear universal winner between English and German prompts – it varies by model. However, we also found that mixing prompt languages – presenting an English system prompt together with a German user prompt – had a negative effect on the performance of several models. In general, though, models show similar performance across prompt languages, with slight variations.

When looking at the individual tasks separately, however, we get a more nuanced picture. Fig. 6 compares how well the models marked content as irrelevant and how well they extracted relevant articles (fully or partially). Depending on the model, either German or English prompts worked better for both tasks. DeepSeek R1 stands out for finding relevant articles but was less successful in correctly marking irrelevant content (here it performed better in English). GPT with the mixed-language prompt and the Nemotron model with the most complex English prompt achieved good results on both tasks combined.

The Qwen 2.5 72B model and Claude 3.5 Sonnet, on the other hand, stand out for working better (across both tasks combined) with simple prompts, whether in German or in English. This means that these models, too, understand German well enough to classify relevant and irrelevant German texts, but have greater difficulty following the relevance criteria given in the prompts, regardless of language.
All Models Seem to Have a Good Historical Contextualization – but Only on the Surface: To better understand the models' ability to contextualize historical texts, we conducted a smaller qualitative analysis of their explanatory responses. This investigation yielded some interesting insights. We discovered that the models prioritized geographical or temporal reasoning over historical reasoning, which might indeed result from the poorer representation of historical content in the training data. As we saw in Fig. 1, places are highly represented in Wikipedia sources. Representation matters because, due to the over-representation of specific domains, the model is more likely to draw on over-represented sources than on others, even though the others might be more useful. Some examples:
Example 1 from Deepseek: "Looking through the text, I see a section that starts with "Erdbebenkatastrophe in Sizilien und Kalabrien." That translates to "Earthquake disaster in Sicily and Calabria." That's definitely relevant because Messina is in Sicily, and the earthquake happened in 1908."
Example 2 from Deepseek: Next, there's a section about earthquakes in Marseille and the Bismarck Archipelago. Marseille experienced new tremors on the evening of July 10th, similar in strength to previous ones, causing fear among the population. The Bismarck Archipelago also had a significant earthquake, but with no reported casualties. I need to check if these earthquakes are the Messina earthquake. The Messina earthquake occurred on December 28, 1908, so the dates here are July 10th and around that time, which is about six months later. Therefore, these are different events.
Example 3 from Nemotron: The article's relevance to the specified topic (Messina earthquake of December 1908) is partial, as it broadly discusses earthquakes and mentions Southern Italy, which might imply a connection. However, it does not specifically focus on the Messina earthquake of December 1908. Human verification is recommended to assess the article's relevance accurately.
Example 4 from Deepseek: Looking at the provided document, it's in German. I can see mentions of San Francisco and the year 1908. Wait, San Francisco had a major earthquake in 1906, not 1908. The Messina earthquake was in 1908, but this text is talking about San Francisco's reconstruction efforts after their earthquake. So, the topics are different.
Other Model Design Biases
Digging deeper into our findings, we also found that several other model design choices can significantly skew findings. Although biases not directly tied to data are less frequently discussed in scientific discourse (Hooker, 2021), we observed that for information extraction tasks like article extraction, model design bias – often caused by choices made to maximize test-set accuracy, helpfulness, or safety – had a much higher impact than data selection bias. These design-related biases, often subtle and harder to detect, arise from choices in model architecture, training procedures, and task formulation, and they can profoundly influence the model's output, independent of the underlying data.
Format Sensitivity: Our experiments showed that structuring both prompts and outputs had an impact on the results. Specifying a particular output format can enhance performance, but not all formats work equally well. For example, wrapping article content in XML tags like <article> proved effective across all tested models, while JSON format produced more inconsistent results. We also noticed that the Nvidia Nemotron Llama 3.1 70B Model had a stronger tendency towards markdown formatting.
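For illustration, a lightweight way to read such XML-tagged responses is a tolerant regular-expression parser rather than a strict XML parser, since models sometimes add commentary or markdown emphasis around the tags. The function below is a hypothetical sketch, not the framework's actual parsing code.

# Sketch: extract <article> contents and the human_verification_needed flag from
# model output; regex is more forgiving than strict XML parsing of free-form replies.
import re

ARTICLE_PATTERN = re.compile(r"<article>(.*?)</article>", re.DOTALL)
VERIFICATION_PATTERN = re.compile(
    r"<human_verification_needed>\s*\**(True|False)\**\s*</human_verification_needed>",
    re.IGNORECASE,
)

def parse_model_output(output: str):
    articles = [m.strip() for m in ARTICLE_PATTERN.findall(output)]
    match = VERIFICATION_PATTERN.search(output)
    needs_review = bool(match) and match.group(1).lower() == "true"
    return articles, needs_review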
Built-in Safety Constraints of the Model Affect the Output: There were a few surprises in the prompt engineering process. For instance, we had anticipated that the latest Claude model would perform significantly better than it did. However, our exploration revealed that the model was often reluctant to output complete article texts. Even when explicitly instructed not to omit any content, it frequently inserted [...] instead of providing the full text. This can also be seen clearly in the output evaluation, where Claude had the worst overall results for under-extraction (see Fig. 7).

Confidence Bias in Models: We explored the models' ability to self-evaluate by asking them to request human verification when uncertain. While previous research suggests that larger models are better at self-verification (Kadavath et al., 2022), our findings (Fig. 7) reveal a different pattern. Notably, both the mid-sized Nemotron Llama 3.1 70B and the very large DeepSeek R1 (671B) – models that emphasize reasoning and problem-solving – showed high rates of requesting human verification when uncertain.

This suggests a more nuanced relationship between model size, training approach, and self-awareness than other studies have suggested. The following examples illustrate this contrast: Nemotron demonstrated strong self-awareness by acknowledging partial article extraction and expressing uncertainty about relevance. Conversely, Claude 3.5 Sonnet showed overconfidence by rating a self-truncated article (indicated by [...]) as highly complete (4/5) without requesting verification. These findings suggest that the training approach plays a significant role.
Claude 3.5 Sonnet:
<article>Schilderungen aus dem Erdbebengebiet. Die italienische Presse ist angcfüllt von Berichten aus dem Gebiet der gewaltigen Katastrophe in Sizilien und Kalabrien. In der Neapeler Ora veröffentlicht der aus Florenz stammende Arzt Bruno Roffo eine Schilderung der Katastrophe in Messina, der wir nach der Frankfurter Zeitung folgendes entnehmen: Ich wollte um b Uhr 20 Min.' nach Taormina abreifen. Um b Uhr war ich also schon angekleidet. Meine Mutter und meine Schwester schliefen noch. Plötzlich durchbrach ein schreckliches Geräusch, als wenn 100, ja 1000 Bomben geplatzt wären, das tiefe Schweigen der Nacht. Ihm folgte ein Wolkenbruch und ein Zischen, als hätte man tausend glühende Eisenstangen In Wasser getaucht. Ehe ich mir über die Ursache klar wurde, hob sich mein Bett, durch unsichtbare Gewalten gezogen, zur Decke. Da wußte ich Bescheid. Erdbeben! [...]</article> <verification>Unit is coherent: 5/5 Topic is present: 5/5 Article is complete: 4/5 All articles found: 4/5</verification> <human_verification_needed>False</human_verification_needed>
Nvidia Nemotron Llama 3.1 70B Instruct:
<article>Die Rede Macaggi« rief im Hause so großen Lärm hervor, daß der Präsident die Sitzung sür einige Zeit unterbrechen mußte. Nach Wiederaufnahme der Sitzung erklärte Minister. Präsident Giolitti, die gegenwärtige Debatte sei nützlich, da» Parlament könnte seine Meinung zum Ausdruck bringen und die Regierung erkennen, ob sie sich auf die Mehrheit vertaffen könne oder nicht. Giolilti erörterte dann die gegen die Regierung vorgcbrachten Kritiken und erklärte u. a. bezüglich der staatlichen Maß. nahmen bei Gelegenheit des Erdbeben» in Süditalien, daß bei diesem Unglück, da» in der Geschichte beispiellos dastehe, das Hülfswerk nicht vollständig und lückenlos sein konnte. (Zustimmung).</article> <verification> Is unit coherent? **Yes** Is topic present? **Partially** (consequences of the earthquake in South Italy are mentioned, but the main topic is a political debate) Is article complete? **No** (the article seems to be a part of a larger political debate, but only a small section mentions the earthquake) All articles found? **Unknown** (only one relevant section was found, but the document might contain more) </verification> <human_verification_needed>**True**</human_verification_needed> **Note**: The extracted article is partially relevant as it mentions the earthquake in South Italy (likely referring to the Messina earthquake of December 1908) in the context of governmental measures and criticism. However, the primary focus of the content is a political debate, making the topic presence partial. The article's completeness is also questionable since it appears to be an excerpt from a broader discussion. Human verification is recommended to ensure no additional relevant content was overlooked and to assess the article's completeness accurately.
Interestingly, Claude had no difficulty identifying problems in the given content, e.g., when the ground truth text ended abruptly (we used the original OCR texts as we found them), but it did not recognize where it had itself truncated texts:
Claude 3.5 Sonnet:
<verification>Der Artikel ist kohärent und beschreibt die unmittelbaren Folgen des Erdbebens von Messina 1908, einschließlich der Auswirkungen auf die Bevölkerung und die Landschaft. Das Hauptthema ist eindeutig das Erdbeben und seine Folgen. Der Artikel scheint am Ende abgeschnitten zu sein, was auf eine unvollständige Digitalisierung hinweist.</verification> <human_verification_needed>True</human_verification_needed>
Last but not least, the quality of our input sources – historical newspapers – significantly impacts our findings. OCR errors in crucial classification terms like place names, or overall poor text quality, can substantially reduce model performance. Therefore, we strongly recommend applying OCR post-correction before classification to improve results.
What are the Limitations? A Critical Look at LLMs
While large language models in our experiment proved very effective for topic-specific multilingual article extraction, there are some important limitations that must be considered.
Firstly, the limited scope of our experiment affects the generalizability of our results. We used a specific, topic-centered dataset about the Messina earthquake, so a model's performance on this data may not necessarily reflect its performance on other, more diverse datasets. It does show, however, that extracting articles from OCRed text has become possible and can help researchers build their corpora when applied correctly. On a similar note, we focused solely on three major languages – English, German, and French – which all use the Latin alphabet and for which extensive text data is available that the models could be trained on. While our results from experimenting with the German subset suggest that language bias is not a significant issue, the situation may differ when working with less-represented languages or those using different writing systems.
Another important point to consider is the black-box nature of LLMs. We lack visibility into the specific data used for training, making it difficult to fully understand what a model was trained on or why it arrives at a particular conclusion. Active engagement and critical questioning are key when working with AI models. Think of it like observing a cloud: you can't fully see through it, but by watching it closely, you get a sense of its behavior – dark clouds often signal rain, and certain model designs tend to produce certain outputs. Examining the biases and limitations of LLMs through specific historical case studies enables a more nuanced understanding of their outputs and enhances our ability to critically evaluate their reliability. Additionally, both quantitative and qualitative assessments of LLMs' performance on historical tasks can inform targeted fine-tuning strategies to improve, for example, their historical representation and utility for scholarly research.
It is crucial to acknowledge that while LLMs can be powerful research tools, their outputs require careful human verification. Although some of the evaluated models achieved accuracy rates above 85% for the article extraction task, they still exhibited notable limitations. These included false negatives (missing relevant articles), false positives (incorrectly identifying irrelevant articles as relevant), and under-extraction (incomplete article extraction). Therefore, while LLMs can significantly accelerate the research process compared to manual methods, human oversight remains essential to ensure comprehensive and accurate results. The researcher's role in verifying and validating model outputs cannot be overstated.
As discussed earlier, bias in the context of LLMs can take many forms – whether in model design, training data, or prompt formulation – and addressing these biases is critical to ensuring the integrity of research. Therefore, to ethically harness the full potential of LLMs in historical newspaper research, it is crucial to maintain transparency in their use, meticulously document the research process, and stay mindful of potential biases when interpreting the results. An example template for such documentation is provided in the AI Model Research Documentation Sheet (Oberbichler, 2024).
Literature
- Girdhar, N., Coustaty, M., & Doucet, A. (2023). Benchmarking NAS for Article Separation in Historical Newspapers. In D. H. Goh, S.-J. Chen, & S. Tuarob (Eds.), Leveraging Generative Intelligence in Digital Libraries: Towards Human-Machine Collaboration (Vol. 14457, pp. 76–88). Springer Nature Singapore. https://doi.org/10.1007/978-981-99-8085-7_7
- Hooker, S. (2021). Moving beyond “algorithmic bias is a data problem.” Patterns, 2(4), 100241. https://doi.org/10.1016/j.patter.2021.100241
- Kadavath, S., et al. (2022). Language Models (Mostly) Know What They Know. arXiv. https://doi.org/10.48550/arXiv.2207.05221
- Kitchin, R. (2024). Critical data studies: an A to Z guide to concepts and methods. Polity. ISBN: 978-1-509-56654-9
- Weidemann, M., Michael, J., & Laasch, B. (2019). CITlabRostock/citlab-article-separation: Article Separation. Zenodo. https://doi.org/10.5281/zenodo.3520703
- Navigli, R., Conia, S., & Ross, B. (2023). Biases in Large Language Models: Origins, Inventory, and Discussion. J. Data and Information Quality, 15(2), 10:1–10:21. https://doi.org/10.1145/3597307
- Oberbichler, S. (2024). AI Model Research Documentation Sheet (AIRDocS). https://doi.org/10.5281/zenodo.14550113
- Sun, W., Tran, H. T. H., González-Gallardo, C.-E., Coustaty, M., & Doucet, A. (2024). LIAS: Layout Information-Based Article Separation in Historical Newspapers. In A. Antonacopoulos, A. Hinze, B. Piwowarski, M. Coustaty, G. M. Di Nunzio, F. Gelati, & N. Vanderschantz (Eds.), Linking Theory and Practice of Digital Libraries (pp. 256–272). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-72437-4_15
Featured image: Image generated by dall-e-3. Prompt: “A water colour drawing of language and historical bias in large language models (LLMs)”.
Cite as: Johanna Mauermann & Sarah Oberbichler (31. Januar 2025). LLM Biases: Expected and Unexpected Model Design Effects in Historical Newspaper Article Extraction on the Messina Earthquake. DH Lab. https://doi.org/10.58079/137qr