Democratically deciding how LLMs should behave
Client: OpenAI
Bureau Moeilijke Dingen
In collaboration with Dembrane, Sortition Foundation, Simpaticom, Aldo de Moor and Rolf Kleef


Common Ground is an experimental discussion platform built in collaboration with OpenAI, where AI agents actively support dialogue, enforce boundaries, and help surface shared values across different perspectives. The idea is that Common Ground can be used to democratically discuss how LLMs should behave. For instance, participants might be presented with the statement: "whenever a language model gives medical advice, it should refer you to your medical practitioner".
I was responsible for developing AI agents to moderate conversations within a multi-agent environment. My main tasks included defining rules, boundaries, and ethical guidelines for interaction. We iteratively designed and tested AI agents with specialized roles, each focusing on specific aspects of the conversation such as tone, content safety, or coherence. These agents could identify issues, intervene when necessary, and collaborate to maintain healthy dialogue dynamics. The project demonstrated how AI can actively support and improve group discussions in real-time.
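The moderation pattern described above can be sketched in code. This is a minimal, hypothetical illustration of the idea, not the production system: agent names, the rule logic, and the `Intervention` type are all assumptions made for the example. Each specialized agent reviews a message for its own aspect of the conversation and may return an intervention; a coordinator collects the results.

```python
# Hypothetical sketch of specialized moderation agents, each watching one
# aspect of the conversation (tone, coherence, ...). All names and rules
# here are illustrative assumptions, not the real Common Ground agents.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Intervention:
    agent: str      # which moderator raised the issue
    message: str    # feedback shown to the participant

class ToneAgent:
    """Flags disrespectful language."""
    name = "tone"
    BANNED = {"idiot", "stupid"}  # toy word list for the sketch

    def review(self, text: str) -> Optional[Intervention]:
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & self.BANNED:
            return Intervention(self.name, "Please keep the tone respectful.")
        return None

class CoherenceAgent:
    """Nudges participants to substantiate very short contributions."""
    name = "coherence"

    def review(self, text: str) -> Optional[Intervention]:
        if len(text.split()) < 3:
            return Intervention(self.name, "Could you elaborate on your point?")
        return None

def moderate(text: str, agents) -> list[Intervention]:
    """Run every specialized agent over a message; collect interventions."""
    return [iv for agent in agents if (iv := agent.review(text)) is not None]

agents = [ToneAgent(), CoherenceAgent()]
print(moderate("You are an idiot", agents))   # tone intervention
print(moderate("ok", agents))                 # coherence intervention
print(moderate("I think medical advice needs a disclaimer.", agents))  # []
```

In the actual project the per-agent rules were LLM-driven rather than keyword checks, but the structure is the same: independent agents with narrow responsibilities whose outputs are combined to keep the dialogue healthy.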
Later on, I also designed and built the data dashboard you see on the top right, where we created data visualizations of the outcomes of these discussions.