Module: Foundations of Digital Humanities
Master's Programme in Digital Humanities
Trier University
17 Oct 2025
Digital Humanities is the application of digital tools to the work of humanists and the creation of humanist works in the digital realm. Its applications encompass research and teaching. – Aaron Gulyas
For me, Digital Humanities is very similar to an English class, just with a computer component to it.
DH is the application and the use of computing technologies for the research, teaching and investigation in the disciplines of the humanities. – Alí Albarrán
Learning and sharing about who we are as human beings, past and present, through digital media and tools; Helping to develop digital techniques, tools, and methodologies that benefit the study of the humanities. – Jeremy Boggs
For me, Humanities Computing means the transformation of creative and informative communication (for all individuals, not just academics, students and other experts) via an ever-expanding array of information and communication technologies, an amplification of the individual’s power to research, to write, to communicate, to publish and to participate in, and create new spaces in, the public sphere. Just as I feel that the Humanities enrich everyone’s life, humanities transformed by ICTs enrich everyone’s life. – Lesley Mary Smith, George Mason University
DH is part of what once was called ‘auxiliary sciences’ in the humanities in the best sense: To know about the theory and methods of carrying out scholarly work in a digital way and in the digital age is prerequisite to all work done. – Torsten Schaßan







“In our testing, we found that frontier model LLMs produce more accurate transcriptions of handwritten historical English language documents than state-of-the-art HTR models on “out-of-the-box” transcription tasks (Tables 1 and 2). To establish a baseline for conventional HTR software, we looked at both transcriptions generated with Transkribus’s older PyLaia model (which is the model most featured in the literature) and its newer and more advanced transformer-based Titan Super Model. The PyLaia model transcribed the test data set with an average strict CER of 10.3% and WER of 27.0%, which is comparable to the results reported by other authors (Christlein et al 2018; Ó Raghallaigh 2022; Prebor 2023). The Titan model improved on these results by 20% (CER=8.0%) and 26% (WER=19.7%). This means that on the task of achieving a perfect transcription, the Transkribus models correctly transcribed between 90 and 92% of the characters and 73 and 80% of the words in the ground truth document.
In terms of the LLMs, two of the three models tested outperformed the PyLaia model on strict CER while all three did better on strict WER. Only Claude Sonnet-3.5 improved on the Transkribus Titan Super Model, scoring 10% better on character accuracy and 19% on word accuracy, achieving a strict CER of 7.3% and strict WER of 15.9% (Gemini 1.5-Pro-002 also outperformed Titan by 4%). This shows that frontier LLMs can achieve state-of-the-art performance without fine-tuning or training on specific document formats or handwriting styles. While this was significantly better than the conventional HTR results, we should note that it is still about 60% less accurate than the upper error rates reported for non-expert human transcribers (Feng et al 2020; Nordo et al 2017; Oliveira 2018; Stolcke 2017).”
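The CER and WER figures quoted above are, in the standard formulation used across the HTR literature, Levenshtein edit distances between a model's transcription and the ground truth, normalized by the length of the ground truth. A minimal sketch of that calculation (the sample strings are invented for illustration, not taken from the study's data):

```python
# Character Error Rate (CER) and Word Error Rate (WER) as standardly
# defined for HTR evaluation: edit distance to the ground truth,
# divided by the length of the ground truth.

def levenshtein(ref, hyp):
    """Minimum number of insertions, deletions and substitutions
    needed to turn hyp into ref (classic dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference, hypothesis):
    # Character-level: compare the strings character by character.
    return levenshtein(reference, hypothesis) / len(reference)

def wer(reference, hypothesis):
    # Word-level: compare token sequences instead of characters.
    return levenshtein(reference.split(), hypothesis.split()) / len(reference.split())

reference = "the quick brown fox"
hypothesis = "the quick brovvn fox"   # a typical HTR confusion: w read as vv
print(f"CER = {cer(reference, hypothesis):.3f}")   # 2 edits / 19 characters
print(f"WER = {wer(reference, hypothesis):.3f}")   # 1 wrong word / 4 words
```

A CER of 8.0%, as reported for the Titan model, thus means that roughly 8 character edits per 100 ground-truth characters are needed to repair the transcription.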
“It emerged that LLMs applied to HTR offer several advantages, including ease of implementation, improved user–model interaction, faster processing times and reduced costs. The differences in workflow, when compared to traditional approaches, could significantly alter how this task is adapted, potentially enabling a single general model to recognize various handwriting styles and languages. Such advancements could enhance HTR predictions and promote a wider adoption in digital libraries.
However, the results of this research show that the feasibility of using both proprietary and open-source LLMs for HTR is skewed towards the English language and mostly on modern handwriting documents, caused by the proportionally unbalanced datasets used during pre-training. Consequently, the performance on other languages and historical documents is consistently weaker, generally producing unusable results. The model which consistently demonstrated the best results overall is Claude Sonnet 3.5. While the accuracy is similar between proprietary and open-source models on modern handwriting and English materials, open-source model performance decreases significantly for historical documents in other languages. Moreover, MLLMs do not demonstrate a consistent and significant capability of autocorrection. In particular, it can be observed that post-corrections produced by open-source models reduced accuracy overall. As for the comparison with Transkribus’ models, it is not possible to generalize if the platform’s models outperform LLMs or vice versa. While LLMs achieved comparable results for English historical handwriting and outperformed Transkribus on modern handwriting and Italian datasets, Transkribus models showed better results on German and multilingual datasets.”
Pick a suitable DH project from the projects page of the EADH association: https://eadh.org/projects.
Identify a specific aspect of the project in which a model was created. Describe:
A few example queries on https://livesql.oracle.com/next/.
-- All countries in region 50
SELECT *
FROM HR.COUNTRIES
WHERE REGION_ID = 50;

-- Postal codes and cities of all locations in Japan
SELECT POSTAL_CODE, CITY, COUNTRY_ID
FROM HR.LOCATIONS
WHERE COUNTRY_ID = 'JP';

-- All accountants, sorted by first name
SELECT first_name, last_name, job_id
FROM HR.EMPLOYEES
WHERE job_id = 'FI_ACCOUNT'
ORDER BY first_name;

-- Employee names together with their job titles, via an inner join
SELECT HR.EMPLOYEES.first_name, HR.EMPLOYEES.last_name, HR.JOBS.job_title
FROM HR.EMPLOYEES
INNER JOIN HR.JOBS ON HR.EMPLOYEES.job_id = HR.JOBS.job_id
ORDER BY HR.JOBS.job_title;
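The inner-join pattern in the last query can also be tried outside Oracle Live SQL. A minimal sketch using Python's built-in sqlite3 module, with two tiny in-memory tables mimicking the HR schema (table layout and sample rows are invented for illustration):

```python
import sqlite3

# Miniature stand-ins for HR.EMPLOYEES and HR.JOBS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE JOBS (job_id TEXT PRIMARY KEY, job_title TEXT);
    CREATE TABLE EMPLOYEES (first_name TEXT, last_name TEXT, job_id TEXT);
    INSERT INTO JOBS VALUES ('FI_ACCOUNT', 'Accountant'), ('IT_PROG', 'Programmer');
    INSERT INTO EMPLOYEES VALUES ('Ada', 'Lovelace', 'IT_PROG'),
                                 ('Luca', 'Pacioli', 'FI_ACCOUNT');
""")

# Same join pattern as above: match each employee to their job title
# via the shared job_id column, sorted by title.
rows = conn.execute("""
    SELECT EMPLOYEES.first_name, EMPLOYEES.last_name, JOBS.job_title
    FROM EMPLOYEES
    INNER JOIN JOBS ON EMPLOYEES.job_id = JOBS.job_id
    ORDER BY JOBS.job_title;
""").fetchall()

for row in rows:
    print(row)
# ('Luca', 'Pacioli', 'Accountant')
# ('Ada', 'Lovelace', 'Programmer')
```

The join keeps only employee rows whose job_id has a match in JOBS; an employee with an unknown job_id would simply be absent from the result.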
Browse the articles of a DH journal of your choice for a visualization that you consider either particularly successful or particularly unsuccessful. Candidates include the journals “Digital Humanities Quarterly”, “Zeitschrift für digitale Geisteswissenschaften” or the “Journal of Historical Network Research”, but you may also choose a different journal.














In a nutshell the problem with computational literary analysis as it stands is that what is robust is obvious (in the empirical sense) and what is not obvious is not robust, a situation not easily overcome given the nature of literary data and the nature of statistical inquiry. There is a fundamental mismatch between the statistical tools that are used and the objects to which they are applied. (Nan Z. Da, “The Computational Case Against Computational Literary Studies”, Critical Inquiry, 2019)
Sybille Krämer, Der Stachel des Digitalen. Geisteswissenschaften und Digital Humanities (Suhrkamp, 2025)

Rens Bod, A New History of the Humanities. The Search for Principles and Patterns from Antiquity to the Present (2013)

Dominic Widdows, Geometry and Meaning (2004)

Rabea Kleymann, Jonathan Geiger et al. (eds.): Begriffe der Digital Humanities. Ein diskursives Glossar (ZfdG, 2023)
