What's next for enterprise agents?
Enterprises want to love agentic AI; they really do. But three years after ChatGPT blew onto the scene, companies are still struggling to find value. Agents are the latest manifestation of 'describe it and it will come,' yet like all enterprise automation efforts over the last couple of decades, it's easier said than done.
AI-driven automation and agentic systems require more system-level sophistication and integration than most companies have been able to achieve so far. Coding appears to be the most successful early agentic use case, yet even there, success depends on crystal-clear, highly detailed prompts.
At MetLife, CIO Nick Nadgauda says they’ve seen significant productivity gains by using AI-first tools to reduce monotonous manual work like ongoing code maintenance and modernization. But he points out that the prompts themselves can become so intricate that it starts to feel like a new programming layer on top of the code.
“When you look at how things work and you look at the level of complexity of the prompts, the amount of detail in the prompts, and the amount of back and forth, it’s almost like coding in a different language,” Nadgauda told FastForward.
Beyond coding, he says MetLife is seeing success with AI operating semi‑autonomously inside operations tasks like call center and claims flows, not just as an on‑demand tool for developers. These systems don’t really “reason” the way vendors talk about agentic AI, but they execute well‑defined steps more autonomously, pushing deeper into automation without crossing into full agency just yet.
Automating the automation
If Nadgauda is focused on making more reliable automated workflows, Salesforce is already thinking about the next step for agentic AI: "ambient intelligence," where agents are woven into the fabric of the software itself, waking up and acting based on context rather than explicit prompts.
Silvio Savarese, chief scientist at Salesforce, believes the way companies use agents is about to shift. "Most agents have been fundamentally active. They operate based on a prompt with specific instructions," he said. "But very soon, and already now, we are seeing that agents are being seamlessly integrated, embedded in the background."
It's a compelling vision for a SaaS company like Salesforce because it doesn't rely on the customer to create the agent; instead, the company builds interactions that happen automatically in the natural flow of using the software.
One example the company offered: a salesperson on a call. The agent would recognize the customer, the past relationship and the nature of the call, and deliver relevant context to the salesperson in real time, information they might not otherwise have at their fingertips.
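The shift Savarese describes can be sketched in code: instead of a user issuing a prompt, an "ambient" agent subscribes to application events and wakes when relevant context appears. This is a minimal conceptual sketch, not a real Salesforce API; every name here (`CallStarted`, `EventBus`, the toy CRM lookup) is a hypothetical illustration of the pattern.

```python
# Conceptual sketch of event-driven ("ambient") agent invocation, as opposed to
# prompt-driven invocation. All names are hypothetical, not a vendor API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CallStarted:
    """Event a (hypothetical) telephony layer might emit when a call begins."""
    customer_id: str

class EventBus:
    """Minimal pub/sub bus: agents register once, then fire on matching events."""
    def __init__(self) -> None:
        self._handlers: list[Callable[[CallStarted], list[str]]] = []

    def subscribe(self, handler: Callable[[CallStarted], list[str]]) -> None:
        self._handlers.append(handler)

    def publish(self, event: CallStarted) -> list[list[str]]:
        # Every subscribed agent sees the event; their outputs are collected.
        return [handler(event) for handler in self._handlers]

# Toy CRM record standing in for the context an agent would assemble.
CRM = {"cust-42": {"name": "Acme Corp", "last_order": "2024-11-02", "open_ticket": True}}

def ambient_context_agent(event: CallStarted) -> list[str]:
    """Wakes on an incoming call and surfaces context without being prompted."""
    record = CRM.get(event.customer_id)
    if record is None:
        return []
    notes = [f"Caller: {record['name']}", f"Last order: {record['last_order']}"]
    if record["open_ticket"]:
        notes.append("Heads up: customer has an open support ticket")
    return notes

bus = EventBus()
bus.subscribe(ambient_context_agent)
# No prompt is ever written; the incoming-call event alone triggers the agent.
notes = bus.publish(CallStarted(customer_id="cust-42"))[0]
```

The design point is the inversion of control: in the prompt-driven model the user decides when the agent runs, while here the software's own event stream does, which is what makes the question of how much to surface (see below) so delicate.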
But that kind of intervention could also cut both ways. An agent surfacing talking points mid-conversation risks disrupting the salesperson's attention and natural conversational flow. Not only could the salesperson sound like they're reading from a script, but they also can't be sure the agent is delivering accurate information, and they have little time to double-check live prompts mid-call.
Savarese admits that finding the right balance is still a work in progress. How much information do you put in front of a person while they're working, and when does helpful become intrusive? "I think that it is important for us to go through a relentless cycle with users to understand exactly what's the best way for users to use these tools," Savarese said.
That iterative approach may be prudent because the stakes go well beyond user experience. Moving from prompt-driven agents to something more autonomous sounds compelling, but the real test is whether those agents can operate on their own without creating more problems than they solve. That's especially true given the trouble companies are still having implementing prompted agents.