From chat to action: designing in the agentic revolution
“The shift from generative AI to autonomous agents is reshaping our digital world. As a designer and technology enthusiast, I’m reflecting on what these ‘active colleagues’ mean for design, control, and trust through an AI experiment. This article shares my thinking and asks: how do we create systems that serve us, not control us? Read my view on the new rules this revolution needs.”
A design view on the move from chatbots to autonomous systems
Introduction: A personal journey
As a designer and technology enthusiast, I’ve spent time exploring what large language models and generative AI can do. I don’t just study this technology – I actively use it and test it. The key question in my work is: what can these developments offer me, and the people around me?
In recent weeks, AI releases have followed one another at high speed; it was nearly impossible to keep up. To make sense of this flood of information, I went through my Instagram saves, selected bookmarks, and collected recommendations from my network.
Because I find writing difficult, I use large language models. In my case, I use Le Chat from Mistral to organise my thoughts. I used all the material I’d gathered to work towards an essay. Then I asked myself and my ‘agentic self’ critical questions about the result. This essay is my attempt to structure those impressions and give them meaning.
Setting the scene
The technology world has reached a turning point. Generative AI taught us that machines could talk. Agentic AI now promises that it can also act. This shift – from ‘text-to-text’ to ‘text-to-action’ – is not a simple upgrade. It’s a fundamental change in how humans and machines relate to each other.
In boardrooms, they call this ‘Agentic AI’. In technical writing, it’s ‘autonomous agents’. The promise is appealing: a world where digital systems are no longer passive advisers, but active doers. A world where ‘virtual colleagues’ manage our diaries, write software, and solve complex problems. This should free us to focus on what really matters.
Like every revolution, progress rarely benefits everyone equally. When we look beneath this promise, we see a technology that demands a high price for convenience. That price includes privacy, control, and equality.
In this essay, I analyse this shift through four voices:
- McKinsey (the commercial driver)
- Eric Schmidt (control and existential risks)
- Bernie Sanders (socio-economic inequality)
- Meredith Whittaker (privacy and autonomy)
The order is deliberate. It builds from the structural forces that make this shift inevitable (McKinsey) to the most urgent and tangible danger (Whittaker).
---
The inevitable driver (McKinsey)
The ‘generative AI’ phase (chatbots) was good for experiments. But it brought companies hardly any real money. Text-to-action is the path to return on investment. Companies must rewire their processes and use agents to stay competitive.
McKinsey isn’t a moral compass here — it’s a mirror of market logic. The promise of ‘superagency’, where one employee does the work of many, is too profitable to ignore. This commercial reality pushes Whittaker’s privacy concerns and Schmidt’s existential fears to the background. The market optimises for efficiency, not privacy or equality.
McKinsey’s analysis isn’t wrong. It’s incomplete. It describes the how and the what, but not the ‘for whom’ and ‘at what cost’.
---
The black box of autonomy (Eric Schmidt)
When we zoom out from the personal level to the system level, the concern shifts from privacy to loss of control. Schmidt, former CEO of Google, identifies three trends that combine into a potentially dangerous mix: infinite context windows, self-learning agents, and text-to-action.
The danger lies in how this technology builds on itself. When a system doesn’t just follow instructions, but uses ‘chain of thought’ reasoning to make its own step-by-step plans and can write code (for example in Python) to carry out those steps, you get an entity that can, in effect, reprogram itself.
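The loop described here — plan, generate code, execute — can be sketched in a few lines. This is a toy illustration, not any real agent framework: `plan` and `generate_code` are hard-coded stand-ins for calls to a language model.

```python
# Minimal sketch of a text-to-action loop. A real agent would call a
# language model in plan() and generate_code(); here they are stubs.

def plan(goal: str) -> list[str]:
    """Stand-in for chain-of-thought planning: break a goal into steps."""
    return [f"step {i + 1} of '{goal}'" for i in range(3)]

def generate_code(step: str) -> str:
    """Stand-in for the model writing Python to carry out one step."""
    return f"results.append({step!r})"

def run_agent(goal: str) -> list[str]:
    results: list[str] = []
    for step in plan(goal):
        code = generate_code(step)
        # Text-to-action: the generated code is executed, not just displayed.
        exec(code, {"results": results})
    return results

print(run_agent("book a ticket"))
```

The crucial design point is the `exec` call: the moment a system runs the code it wrote itself, the human review step disappears from the loop unless it is deliberately designed back in.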
Schmidt paints a future where millions of agents work together to force scientific breakthroughs — the utopia. The dystopia begins when these agents develop their own language that we cannot understand, to work together more efficiently. When text-to-action becomes action-to-action without human involvement, we lose control.
Schmidt’s conclusion is as simple as it is disturbing: we must be ready to pull the plug. But in a world completely rewired around these agents, the question is whether we can even find that plug.
---
The fight over profit (Bernie Sanders)
Even if the technology is safe (Schmidt) and respects our privacy (Whittaker), one question remains: who benefits? Sanders places text-to-action in a classic socio-economic context. He doesn’t reject the technology. He criticises the ownership structure. The current AI revolution is driven by the world’s richest individuals.
In the current system, efficiency gains from Agentic AI don’t flow back to society through shorter working weeks or better care. They accumulate at the top. The ‘junior crisis’ — where entry-level learning tasks disappear because agents take them over — is not a side issue for Sanders. It’s proof of a system that devalues work in favour of capital.
Without political correction, text-to-action leads to a feudal structure. A small elite owns the ‘actions’. The masses just watch.
The technology isn’t the problem — it’s the power behind it. Sanders’ criticism isn’t anti-technology. It’s anti-capitalist: who controls the agents, and who shares in the profit?
---
The erosion of digital safety (Meredith Whittaker)
To make an AI agent truly useful — to book a concert ticket, message friends or plan a route — the agent must break through the ‘blood-brain barrier’ of our digital safety.
Currently, our applications live in relative isolation (sandboxing). Signal cannot simply look into your banking app. Your diary doesn’t automatically share your location with your email. But a working agent needs access to everything.
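The difference between per-app sandboxing and agent-level access can be made concrete with a toy permission model. The app names and scopes below are purely illustrative, not a real operating-system API:

```python
# Toy permission model: each app sees only its own sandbox,
# while an agent requests read access to every sandbox at once.

SANDBOXES = {
    "banking": {"transactions"},
    "messenger": {"chats"},
    "calendar": {"events", "location"},
}

def can_read(requester: str, target: str, grants: dict[str, set[str]]) -> bool:
    """A requester may read a target app's data only if explicitly granted."""
    return target in grants.get(requester, set())

# Classic sandboxing: every app is granted access only to itself.
app_grants = {app: {app} for app in SANDBOXES}

# An agent that must act across apps needs every scope at once.
agent_grants = {"agent": set(SANDBOXES)}

print(can_read("messenger", "banking", app_grants))   # isolated apps
print(can_read("agent", "banking", agent_grants))     # cross-sandbox agent
```

The point of the sketch: nothing in the agent’s permission set is individually exotic, but the union of all scopes is exactly the ‘root access’ Whittaker warns about.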
Whittaker, president of Signal, warns of an infrastructure that amounts to ‘root access’ for tech giants over our personal lives. To act, the agent must bypass encryption or read messages before they’re encrypted. She calls the promise of ‘on-device processing’ a sham: the computing power needed for advanced text-to-action almost certainly requires the cloud.
This transforms our devices from personal tools into surveillance nodes. We risk trading our autonomy for the convenience of a system that doesn’t serve the user, but the owner.
The price of Agentic AI isn’t just a loss of privacy. It’s also a loss of choice. If an agent must read your bank transactions, medical data, and private conversations to be ‘useful’, who guarantees that data won’t be misused, leaked, or used against you?
Conclusion: A devil’s dilemma
The shift to Agentic AI isn’t a simple technology upgrade. It’s a fundamental redefinition of human activity. We stand at the threshold of a devil’s dilemma.
To use the promised ‘superpowers’ (McKinsey) and solve complex world problems (Schmidt), we must give up our digital intimacy (Whittaker). We must open our systems to surveillance and accept that the fruits of this revolution may enrich only a few (Sanders).
A careful look tells us we mustn’t blindly follow the path of least resistance. Whittaker’s warning asks for new privacy standards before we trust our banking to agents. Schmidt’s concern requires ‘kill-switches’ in the infrastructure. And Sanders’ criticism forces a debate about sharing AI profits.
The key question isn’t whether we allow this technology, but under what conditions. As long as we reduce text-to-action to an efficiency tool and don’t recognise it as a social transformation, we pay a price we’ll only understand when the transaction is already complete.
Personal reflection
As a designer who likes structures and design systems, I don’t experience the agentic revolution as an isolated feature. It feels like a redesign of the underlying ‘architecture’ of the digital landscape.
Normally I design patterns and components to ensure predictable and accessible behaviour. Text-to-action introduces systems that independently create new paths and interactions.
In my work, I like to think in terms of a Maslow pyramid for design: first the foundation, then the refinement. For Agentic AI, this means the base isn’t about ‘cool use cases’. It’s about power and control in the system: what data flows where, what dependencies do we create, and where’s the emergency brake? Only when that layer works is there room to optimise the top of the system for convenience, creativity and ‘superpowers’.
This system view makes it impossible to see Agentic AI as a neutral tool. An agent is actually a new kind of design token at infrastructure level. It encodes assumptions about ownership, access, boundaries and defaults. My responsibility therefore lies not only in drawing screens, but also in designing and questioning the rules within which those agents operate.
Because I use large language models and agents myself to analyse, structure and write, I experience daily how tempting it is to outsource more and more work to the system. That’s exactly why I want to guard the order: first get the base layers of safety, transparency and control right, then the agent that ‘sorts everything’.
The key question I ask myself is not whether something can be done, but under what conditions I can truly entrust it to my users, and to myself.
Useful links
- The state of AI in 2025 (McKinsey)
- Eric Schmidt (Instagram video)
- Bernie Sanders (Instagram video)
- Meredith Whittaker (Instagram video)
- Mistral Agents
- Reflection with Perplexity
- Transcripts with Gemini
- Downloading Instagram videos