
The Critical AI Seminar series will continue in 2025 and 2026 with another four lectures that critically address Artificial Intelligence (AI) from various perspectives – across different contexts of application and through different lenses of critique. With these lectures we hope to once again bring together scholars from around the world in engaging discussions and further contribute to Critical AI Studies as a continuing ‘field in formation’ (Raley and Rhee, 2023).
The seminars are online and open to everyone. For each seminar, one or two prominent speakers are invited to give a talk that engages theoretically or empirically with AI.
The seminar series is organised by Anna Schjøtt Hansen, Dieuwertje Luitse and Tobias Blanke, who are part of the Critical Data & AI Research Group at the University of Amsterdam. It is supported by the University of Amsterdam Research Priority Global Digital Cultures and the Amsterdam School for Cultural Analysis, and hosted by Creative Amsterdam (CREATE).
Upcoming seminars
You can find the individual events and how to sign up by clicking on the headlines below.
November 12, 17:00-18:30 (CET): Invited talk by Louise Amoore and Alexander Campolo, ‘On reading machine learning’
One of the strengths of Critical AI Studies has been the rapid development of methods for addressing the different social and political objects that encompass AI. We now have outstanding studies of datasets, material infrastructures, ecologies, histories, and the political economy of platforms. Rather than naively “reading” model outputs, these studies account for their conditions of possibility. They cut through words—ideologies ranging from hype to doom—to grasp the interplay of interests, materiality, and power that constitutes AI. In this talk, we will reflect on the characteristics of this literature, its distinctive tropes, style, and conventions. We then propose critical reading strategies for scholars in the interpretive social sciences and humanities, who, in their own way, face the problem of reading texts for which they are not the intended audience.
January 13, 17:30-19:00 (CET): Invited talk by Fabian Offert on ‘Vector Media’
This talk presents a new history and theory of the vector space in contemporary artificial intelligence systems. I will argue that the inevitable bias of such systems lies not only in what they represent, but in the logic of representation itself. Their internal ideologies are often not directly visible in their generated outputs or even their training data, which have been the focus of almost all existing work. Instead, they emerge from how the model organizes and transforms information within itself. While previous media technologies created new formats or imitated existing ones, deep neural networks seek to dissolve prior media into a universal space of commensurability: the vector space. Cultural objects, once specific to a medium, are rendered fungible: commodities in a new neural economy, expressed only in terms of their neural exchange value.
March 18, 17:00-18:30 (CET): Invited talk by Helene Ratner and Nanna Thystrup on ‘Ecologies of evaluation’
This talk develops the concept of “evaluation ecologies” to theorize how machine learning (ML) systems are assessed in public sector contexts. Through a conceptual analysis supported by case studies of ML deployments in Danish higher secondary education and Dutch psychiatric clinics, we demonstrate how evaluation practices extend beyond technical assessment to encompass complex negotiations of power, expertise, and accountability. Drawing on theoretical perspectives from Science and Technology Studies (STS) and building upon Halpern and Mitchell’s (2023) work on experimental governance and Amoore’s (2020) analysis of cloud ethics, we advance “evaluation ecologies” as a framework for understanding how ML assessments unfold through multiple, often contradictory registers.
May 20, 12:00-13:30 (CEST): Invited talk by Thao Phan on ‘Testing-in-the-wild’
This presentation analyses the phenomenon of the AI testbed and practices of “testing-in-the-wild.” It combines historical and sociological approaches to understand how places like Australia have come to be treated as ideal test sites for new AI systems, using commercial drone delivery company Wing Aviation as a case study. It connects the figuration of Australia as a contemporary testbed with histories of the nation as a colonial experiment. I argue that this historical frame has been consistently deployed to justify the treatment of lands and peoples as experimental subjects across a range of domains: techniques of penal management in the nineteenth century, military weapons in the early twentieth century, and AI-driven systems like drone delivery in the twenty-first century. By connecting this history to the present moment, I show how Australia has been variously treated as a test site and Australians as test subjects based on changing imaginaries of the nation and its people, from proxies for whiteness and Empire in the colonial period, to multiculturalism and ethnic diversity in the contemporary era.