P005 → Dis-assembling Synthetic Agents
Participants: Hendrik Bender, Marcus Burkhardt, Daniel Heideloff, Bogdan-Andrei Lungu, Anna Lena Menne, Nuoyi Wang, Pauline Reitzer, Bernd Rößler, Simon Waidner


Finding stable paths on shaky grounds


Working with generative models, creation becomes speculative to the user. Outputs, in their essence, are defined by probabilities and become unpredictable. A perceived loss of control over technology is uncomfortable and challenges its integration into working practices that demand stability. Gaining back a sense of control requires strategies to navigate the latent spaces of generative AI models and define as well as restrict outcomes.  Custom GPTs, integrating LLMs, give us a sense of control over working practices by enabling a division of tasks, isolation of risks, and definition of specific goals. As Sam Altman puts it, “the GPTs are a first step towards a future of agents” (Sam Altman, 2023, OpenAI DevDay). However, although Agents and Multi-Agent systems come with a big commercial narrative, promising to accomplish tasks on our behalf (OpenAI, 2025), it remains unclear what exactly “agents” are.   As such, our group set out to engage the recent shift toward agentic AI and the proliferation of synthetic agents, aiming to explore methods to investigate the varieties of synthetic agents, their anatomy, as well as interrelationality within increasingly synthesized human-machine-worlds. 

As a group, we collaboratively decided to compare the front-end interface layer of Custom GPT creation with the system prompt backend layer, which structures how various agents operate. While the everyday user encounters a simplified interface that allows entering instructions and toggling features, they are unable to access the system prompt governing the GPT system's configuration and, consequently, the model’s behavior.  We began by selecting a coherent thematic area: romance-related Custom GPTs. From the larger dataset, we identified ten GPTs falling into three broad categories: 1) Attractiveness evaluators, 2) Advice providers, and 3) Partner simulators. Of these ten, we were able to jailbreak six system prompts, leaving us with two subgroups: three Custom GPTs centred on attractiveness and three entered on romantic advice. These six prompts formed the core of our exploration and analysis.  By working across both the front-end layer (Walking through the interface of agent creation) and the backend layer (analyzing the Custom GPT system prompts), we set out to investigate how the two layers are entangled, aligned, or misaligned. As such, we were able to examine both the affordances and the limits of customization: how interaction with the interface suggests flexibility and uniqueness, and how the underlying system enforces a stable, universal scaffold. 


Creating AI Agents: An Interface Analysis of Agent Builders



Method

Data collection is employed through task-oriented Walkthrough Method on two cases of agent development tools - GPT Builder from OpenAI and Coze from ByteDance by exploring interfaces, functionalities and workflows step-by-step. We created customized agents which act as a romantic partner, angry neighbour or weather adviser and analyzed how the interface and AI are communicating with each other. We documented the Walkthrough by creating screenshots for every new step, taking notes of the main arguments from the group discussion, and visualizing the entire workflow.


Building Elements







OpenAIs GPTs


Building Blocks






Byte Dance - Coze


Building Blocks






Anatomy of a Prompt






System Prompts Comparison






Block Barchart of Overall Agent Capabilitites







Venn Diagram of Agent Capabilities’ Distribution






Recurrent Textual Components of CustomGPT Prompts

  

Wordtree of the use of “I” and “You” in CustomGPT Prompts