Post
06/03/2025
09:46 PM

Big Tech’s Gaze and Representational Self-Defence

How can we resist Big Tech’s sovereignty over interpretation and representation online? Can we develop artistic strategies of deception, refusal, or resistance? Can we reorient their systems to serve our agency, rather than exploit it?

Language modeling has fueled the recent AI hype, but it has also established a new dominant paradigm that reaches far beyond spooky chatbot demos. Search engines, recommendation and moderation systems, generative AI for text, images, and video, as well as the more hidden digital infrastructures of everyday life, all rely on ongoing advances in how machines understand context through methods of semantic representation.
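To give a rough sense of what "semantic representation" means in practice: embedding models map words, images, or whole documents to vectors of numbers, and closeness between vectors stands in for closeness of meaning. The following is a minimal sketch with made-up three-dimensional vectors; real models learn hundreds or thousands of dimensions from scraped data.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" with invented values, purely for illustration.
embeddings = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

# "cat" and "dog" point in similar directions, so the system treats
# them as semantically close; "cat" and "car" do not.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low
```

The point of the sketch: every interpretive judgment the systems described above make is ultimately a geometric comparison like this one, trained on data someone selected.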

Questions of interpretation and representation, traditionally explored in the arts, related academic fields, and public discourse, are now being translated into automated systems that operate under the assumption of objectivity and technological neutrality. In reality, these systems are often shaped by biases, misinterpretations, and misrepresentations. As a result, they tend to reproduce and reinforce discriminatory and hegemonic structures and existing asymmetries of power.

Integrated into highly complex architectures, these models that interpret meaning are a central part of the systems that generate, curate, and verify our media realities. In addition to their implications for our mental autonomy and health, they have a strong impact on our viewing habits and content production. They influence how we perceive, understand, and represent the world and ourselves, and in doing so, shape norms and the broader social fabric.

However, these implications are not limited to media consumption on digital platforms. Increasingly, large amounts of data scraped from online sources are used to develop new technologies. As a result, algorithmic and corporate decisions about what parts of our online activity are considered relevant or redundant, ethical or harmful, become embedded in these datasets and are quietly reproduced in the technologies built on them. This, in turn, may have a significant impact on our future everyday lives and cultural production.

While some of these systems serve important purposes, such as counteracting discriminatory content online, their broader influence often escapes public scrutiny. As they quietly shape our online environments, they also raise urgent ethical questions about accountability, public visibility, and whose values are encoded in the technologies we use.

In this workshop, we will explore the basic architecture of these systems and examine their growing influence on visual languages, creative tools, and democratic publics. Together, we will reflect on how they can shape meaning, and conduct a hands-on hacking experiment: CAN WE DECEIVE THESE MACHINES INTO SEEING WHAT WE WANT THEM TO SEE?
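One family of such deceptions, often called adversarial examples, can be sketched even without a real model: small, targeted changes to an input can flip an automated classifier's decision while barely changing what a human sees. Everything below is invented for illustration; a real attack would target an actual vision model, but the core idea, nudging each input feature in the direction that moves the score, is the same.

```python
# A deliberately tiny linear "classifier": score > 0 means label "cat".
# The weights and inputs are made-up toy values.
weights = [0.5, -1.2, 0.8]
bias = -0.1

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def classify(x):
    return "cat" if score(x) > 0 else "not cat"

x = [0.2, 0.4, 0.1]  # original input: classified "not cat"
eps = 0.3            # small perturbation budget

# Shift each feature by +/- eps in the direction of its weight's sign,
# pushing the score upward without changing the input much.
x_adv = [xi + eps * (1 if w > 0 else -1) for xi, w in zip(x, weights)]

print(classify(x))      # "not cat"
print(classify(x_adv))  # "cat"
```

The machine now "sees" what the attacker wants it to see, which is exactly the question the workshop experiment puts to real systems.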

Duration: approx. 4 hours

No coding experience or prior technical skills are required. Participants will need a computer with internet access, a Google Colab account, and, if possible, access to an Instagram account.

Author
Eleonora Dieterichs


The project "Digitale Perspektiven in der Kunst" ("Digital Perspectives in Art") is a cooperation between HfG Offenbach, HfMDK Frankfurt, Kunsthochschule Kassel, and the Städelschule, funded by the Hessian Ministry of Science and Research, Arts and Culture (Hessisches Ministerium für Wissenschaft und Forschung, Kunst und Kultur).
