The next era of human–computer interaction
A few months ago, Andrej Karpathy made some powerful points in his talk on Software 1.0, 2.0, and 3.0 about how software has evolved.
Since watching it, as a designer already working in the agentic AI industry, I've been thinking about how user experience is going through the same inevitable evolution, and how, as the people designing and building this new order, we first need to notice and understand it, and then design for it.
Human–Computer 1.0
Our interaction with computers began with simple mathematical operations and continued with pre-programmed applications; with the arrival of the internet, it moved from local interactions to fulfilling our needs through apps that connect us to other people and businesses.
What I call 1.0 here actually covers the parts Andrej calls Software 1.0 and 2.0.
Even though the technologies, possibilities, and tools have changed, the core of the user's interaction with computers has stayed the same:
The user gives a command; the computer returns an output or completes a task.
Actions always start with the user, and the computer only responds.
This period spanned roughly 30–40 years, yet the human–computer experience didn't fundamentally change during that time.
Human–Computer 2.0
Until now. With the developments of the last two years, the way we interact with computers, our usage patterns, and our habits are on the verge of a complete shift.
This shift won’t be instant or easy, but our apps, our workflows, our daily needs — and, most importantly, our gestures — are all changing.
The biggest reason is, of course, the rise of powerful AI models and the opportunities they bring.
In the 1.0 era, our interaction with computers consisted of clicks, selections, form-filling, accepting, rejecting, completing steps, and typing.
In the future we’re heading toward, all these actions will give way to humanity’s oldest and strongest skill: communication.
Actually, we already “communicated” with computers before, but it was always through an interface — a path from input to output that was as sharp and binary as 1 and 0.
Now it’s becoming a much more real interaction. It is smarter, more capable. It knows us, recognizes us, and holds almost everything about us.
That means it has the infrastructure to do what we need when we simply talk to it.
Right here, a new stage of human–computer communication is opening: talking.
For now it’s mostly written, but I say “for now” because soon it will almost certainly be voice — a bigger leap where we simply speak our needs.
And then? It won’t stop there. The next step is obvious: computers that don’t just listen and respond, but act — moving from the digital layer into the physical one.
Interfaces will slowly fade into the background, blending with daily life, while agents and robotics step forward as part of that life.
We are living right at the edge of this shift — a moment where interaction with computers is transforming from command, to conversation, to collaboration. And once it crosses into our physical reality, it won’t just feel like using a computer anymore. It will feel like living with one.
For us designers, this means a whole new responsibility. The gestures and patterns we’ve built for decades — clicks, taps, swipes — are no longer enough. The new gestures will come from writing and speaking. They will come from pauses, tones, confirmations, misunderstandings, and corrections. Designing for this means shaping conversations instead of buttons, shaping behaviors instead of layouts.
And soon, as interfaces fade into real life and computers blend into our environments, we’ll have to design not just for screens but for sounds, voices, and presence. The next design language is human language itself — written today, spoken tomorrow, and maybe embodied in robotics the day after.
As builders of this era, our perspective has to change. We need to think about the new gestures, behaviors, sounds, and rhythms that will define how humans live with machines. Because this time, we’re not just designing interfaces. We’re designing relationships.