
Doing Digital History with Python IV: web automation

Before digital humanists can do things with data, they first need to collect them, and web automation (or, more specifically, web scraping) can be a quick way of gathering large amounts of data. While web automation denotes every remotely controlled action performed on the web, web scraping, web mining or web harvesting focus on reading and processing information found on websites. This blog post presents useful Python packages for these tasks and explains the advantages of working with browser profiles.

As an early modern historian, I mainly use web automation for batch downloads of bibliographical or biographical data from archival services, and I use web scraping to extract details from the result pages I get when running queries in a library catalogue. To automate browser activities, you can use multiple tools, and not all of them require in-depth programming skills.

Scrapy, for instance, is a command-line tool written in Python for creating spiders that crawl a site and extract data, and it does not require you to code from scratch. The official documentation is very detailed and beginner-friendly, and Scrapy can also query APIs (such as Amazon Associates Web Services).
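To give you an idea of what a Scrapy spider looks like, here is a minimal sketch; the URL, CSS selectors and field names are placeholders for illustration, not taken from a real catalogue:

# minimal Scrapy spider sketch (URL and selectors are placeholders)

import scrapy

class CatalogueSpider(scrapy.Spider):
    name = "catalogue"
    start_urls = ["https://example.org/catalogue"]

    def parse(self, response):
        # yield one record per title element found on the page
        for title in response.css("h2.title::text").getall():
            yield {"title": title}
        # follow the pagination link, if there is one
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

Saved as a single file, such a spider can be run with "scrapy runspider spider.py -o results.json" without setting up a full Scrapy project.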

Using Python packages as part of your custom scripts is another option, and one that I find particularly convenient. Python scripts for web automation are easy to write and adjust, and you will find plenty of online tutorials and books detailing the process. This is why I don’t want to give yet another step-by-step introduction here but will highlight some overall aspects and point you to other resources you might want to consult.

When getting started with web automation in Python, it is important to consider that there is no single go-to package that suits everyone; your choice of tools very much depends on what you intend to do. If you are trying to read data from a public website that does not require any log-in details, it is probably enough to use the Requests: HTTP for Humans package.
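For a single public page, a few lines are enough. This is a minimal sketch with the Requests package; the index number 43 is just an arbitrary example:

# minimal sketch: fetch one public page with Requests

import requests

response = requests.get("http://www.nli.ie/en/NewspapersDetails.aspx?IndexNo=43")

if response.status_code == 200:  # 200 means the request was successful
    with open("43.html", "w", encoding="utf-8") as f:
        f.write(response.text)  # save the HTML for later processing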

The following example is a script I wrote to download metadata of Irish newspapers from the National Library of Ireland (NLI) website:

# Script to download HTML files by URL with consecutive index numbers

# example: get newspaper metadata from the National Library of Ireland (NLI) website

# import packages

import urllib.request  # opens each URL
import shutil  # copies the response stream into a local file
import os  # builds the output file path

# URL to be called

NLI="http://www.nli.ie/en/NewspapersDetails.aspx?IndexNo="

# define range of index numbers

itemIDs=range(20000)

# open files and save on drive

for ID in itemIDs:
    # build the target URL from the base URL and the current index number
    url=NLI+str(ID)
    print(url)
    html=urllib.request.urlopen(url)
    file_name=str(ID)
    # replace the absolute path below with your own download folder
    with open(os.path.join("C:\\Users\\[USERNAME]\\10_Data analysis_PhD", file_name), 'wb') as f:
        shutil.copyfileobj(html, f)
        print("File no.", ID, "downloaded!")
                    
print("Done")

HTML_download-by-URL-index.py [github]
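The loop above requests up to 20,000 URLs in quick succession and may stop with an error if an index number cannot be retrieved. If you want to skip missing pages and be gentler with the server, a variant of the same loop (using the Requests package mentioned above; the .html file names are my own choice) could look like this:

# variant of the download loop with a status check and a pause

import time
import requests

for ID in range(20000):
    url = "http://www.nli.ie/en/NewspapersDetails.aspx?IndexNo=%s" % ID
    response = requests.get(url)
    if response.status_code == 200:  # only save successful responses
        with open(str(ID) + ".html", "w", encoding="utf-8") as f:
            f.write(response.text)
    time.sleep(1)  # pause for one second to avoid overloading the server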

If, however, you wish to access a service that requires registration, you will probably need to work with several packages at a time. The most popular web automation package for Python is Selenium, which permits you to handle buttons, checkboxes and drop-down menus on websites. In this way, you can, for example, enter a password or submit a search. Selenium supports Firefox, Chrome and Internet Explorer as well as the Remote protocol. If you are using the same protected service over and over again, there is a way to make logging in really easy: permanently store your credentials in a Firefox or Chrome profile and use that saved browser profile in your script.
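As a minimal illustration of such interactions, the sketch below opens a page, types a query into a search field and submits it. The URL and the field name "q" are placeholders, and the sketch uses the same Selenium 3 API as the scripts in this post (in Selenium 4 you would write driver.find_element(By.NAME, "q") instead):

# minimal Selenium sketch: fill in and submit a search form (placeholder URL and field name)

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()  # assumes geckodriver is on your PATH

driver.get("https://example.org/catalogue")

search_box = driver.find_element_by_name("q")  # locate the search field
search_box.send_keys("Irish newspapers")  # type the query
search_box.send_keys(Keys.RETURN)  # press Enter to start the search

driver.quit()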

I created the script below to open Firefox with a custom profile in order to avoid pop-up boxes when downloading authority data (Normdaten) from the website of the German National Library (DNB):

# Script for scraping .TTL files with biographic information from the DNB website

# import SELENIUM for web automation (only the modules actually used below)
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary


# Opening FIREFOX with custom profile (no pop-up windows for download)

binary = FirefoxBinary("C:\\Program Files\\Mozilla Firefox\\firefox.exe")
profile = FirefoxProfile("C:\\Users\\[USERNAME]\\AppData\\Roaming\\Mozilla\\Firefox\\Profiles\\eiwnvwt0.moba")
driver = webdriver.Firefox(firefox_profile=profile, firefox_binary=binary, executable_path='C:\\Users\\[USERNAME]\\geckodriver.exe')
driver.implicitly_wait(2)
driver.maximize_window()

print("Browser opened.")

# construct query URL for DNB biographic data containing "geograf"

QueryURL="https://portal.dnb.de/opac.htm?method=showFullRecord&currentResultId=geograf+sortBy+tit%2Fsort.ascending%26any%26persons&currentPosition="

result_IDs=range(0, 10203) # no. of results shown in DNB catalogue

# define target URL for each file

for i in result_IDs:
    # append the result position to the query URL
    targetURL=QueryURL+str(i)
    print(targetURL)
    
# navigate to target URL

    driver.get(targetURL)

# find and click link: class="rdfview" (download link for .TTL file)

    try:
        download=driver.find_element_by_class_name("rdfview").get_attribute("href")
        print(download)
        # trigger direct download (based on profile settings)
        driver.get(download)
        print("File no.", i, "downloaded!")
    # exception handling if "rdfview" is not found
    except NoSuchElementException:
        print('not found')

# notification that task is complete    
print("All requested items have been downloaded!")
# close browser window
driver.quit() 

Webautomation_scrapingDNB_Firefox.py [github]
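The script relies on the saved profile to store the .TTL files without a download dialogue. If you would rather not maintain a separate profile for this, the relevant preferences can also be set in the script itself. The sketch below uses the same FirefoxProfile API as the script above; the download directory and the MIME types are assumptions you will need to adapt:

# sketch: configure silent downloads without a saved profile (path and MIME types are assumptions)

from selenium import webdriver
from selenium.webdriver.firefox.firefox_profile import FirefoxProfile

profile = FirefoxProfile()
profile.set_preference("browser.download.folderList", 2)  # 2 = use a custom download directory
profile.set_preference("browser.download.dir", "C:\\Users\\[USERNAME]\\Downloads\\ttl")
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", "text/turtle, application/x-turtle")  # save Turtle files without asking

driver = webdriver.Firefox(firefox_profile=profile)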

You may want to download the script for your own use from my “Digital History” repository on GitHub. Please make sure to replace the absolute Windows file paths with your own file locations. Information on creating, removing or switching Firefox profiles can be found on the Mozilla support site. Mozilla Firefox also offers a password manager, but I have found that web automation runs much more smoothly in Google Chrome.

To save passwords in Google Chrome, open the browser, click on “profile” and “passwords” at the top right, and turn on the “save passwords” option. The user profile features in Chrome also permit you to set up a private profile and a professional profile on the same machine, and you can thus create different profiles for different programming tasks or research projects. In this way, you will save a lot of time and make sure that your credentials never directly feature in any scripts you write. This is especially important if you want to share scripts with your students or colleagues.

This is how a personalised Google profile is accessed in a Python script:

# Opening CHROME with custom profile

options=webdriver.ChromeOptions()
options.add_argument("--user-data-dir=C:\\Users\\[USERNAME]\\AppData\\Local\\Google\\Chrome\\User Data\\")
options.add_argument('--profile-directory=Profile 1')
# alternative: let the webdriver_manager package download a matching driver
#driver = webdriver.Chrome(ChromeDriverManager().install())
driver = webdriver.Chrome(executable_path='C:\\chromedriver_win32\\chromedriver.exe', options=options)
driver.implicitly_wait(2)
driver.maximize_window()

print("Browser opened.")

Webautomation_addITEMS_HathiTrustCollections.py [github]

If you are having trouble switching between profiles in your scripts, the forum discussion How do I start Chrome using a specified “user profile”? may help.

A recent use case of mine was building featured collections on selected historical topics in HathiTrust. HathiTrust provides access to metadata and full-text sources of mostly older books scanned in libraries around the world. HathiTrust collaborates with Google Books and offers many additional data and data-analysis resources to researchers. A featured collection can be the starting point for text analysis in the HathiTrust Data Capsules, but you can also export collection data in JSON format. To find out more, you may want to read the following blog post:

Barget, Monika Renate. ‘Building featured HathiTrust collections with web automation’. INSULAE (blog), 6 October 2020. https://insulae.hypotheses.org/169.
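If you export a collection as JSON, the file can be inspected in Python with the standard json module. This is only a rough sketch: the file name is an assumption, and the structure of the export may differ from what you expect, so check it before further processing.

# rough sketch: inspect an exported collection file (file name is an assumption)

import json

with open("hathitrust_collection.json", encoding="utf-8") as f:
    collection = json.load(f)

print(type(collection))  # list or dictionary, depending on the export
print(len(collection))  # number of top-level entries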



Featured Image: Python bivittatus, Print, between 1700 and 1880, 27,49 x 21,91 cm, Iconographia Zoologica, Special Collections of the University of Amsterdam, Wikimedia Commons.


OpenEdition suggests that you cite this post as follows:
Monika Barget (7 May 2021). Doing Digital History with Python IV: web automation. Digital Humanities Lab. Retrieved 13 September 2024 from https://doi.org/10.58079/nl5t


Author: Monika Barget

Studied History, Art History and Catholic Theology in Augsburg (Magistra Artium). 2009–2013 professional experience in content management, copy-editing, customer service and editorial work. 2013–2017 doctoral studies in the Cluster of Excellence “Cultural Foundations of Integration” at the University of Konstanz. 2017–2018 academic project management at the Centre for Digital Humanities at Maynooth University (Ireland). Since January 2019 research associate at the IEG (Digital Humanities Lab / Digital Historical Research).

