Agents as Product Managers
"Write an email for me," "Schedule a call for me," "Book a flight for me." When we know precisely the output we want, delegating the task is easy. Many agent demos today, such as lindy.ai, let you describe the desired output, and you'll get it. You may even be able to tell a more high-level goal soon with tools like sweep.dev: "Build an iOS app for me where I can play 2048, except there are twice as many tiles; call the game 8096."
However, all of these commands already live in the solution space. As the AI agent, why am I writing that email? Why am I building that game? What is the intent behind the request, the outcome we actually want? The iOS app might be for fun: I want my nephews and nieces to be able to play it. Or it might be for profit: I might want to charge for the game.
Right now we're still trying to get agents to do basic tasks for us, which in itself is already very hard; only a few tasks work at all. But what happens once an agent can perform most of these basic tasks?
A good product manager doesn't just build the feature the user asks for, but first deeply understands the underlying problem. Once that is clear, the optimal solution may look completely different from what the user had in mind. The user is the master of their problem space, understanding it deeply, but often not the master of the solution space: they may not know what is possible with the latest advances in design and technology.
What if an AI pushes back and asks: "Why do you want me to build this app? Is it about making money? If so, I can calculate an optimal trading strategy that will make you far more."
I'm curious to see how interaction patterns with AI will evolve: how, in conversation with our AI agents, we will figure out why we want something, and how they will help us achieve the optimal outcome.