Building AI Agents - How to build a Business Case
How do you define an AI Agent? AI Agents, though delivered as software, are not just about replacing the existing software tools used to do a job. They are about re-designing the roles and responsibilities of the job itself, which means re-designing both the software and the knowledge work a human does with that software. This definition of AI Agents is key to evaluating their opportunity. Let's break it down further.
Software
The majority of AI Agents in the news today, however, integrate with rather than displace existing software tools. These integrations fetch data from the system of record and apply intelligence on top, rather than replacing the source, at least for now. Examples of systems of record include Salesforce for GTM, ServiceNow for customer support, Datadog for observability, Terraform for IaC, and so on. Do we expect this to work in the long term? Will customers continue paying for the system of record, and separately for the AI Agent on top?
Knowledge work
While cloud-native software targeted IT budgets only, AI Agents target a combination of IT budgets and knowledge worker and professional services budgets. For reference, knowledge workers' salaries and professional services budgets combined are at least an order of magnitude larger than IT software budgets (i.e., measured in USD trillions). Say you're building an AI agent for code security remediation. In this case, the market opportunity isn't limited to the subscription cost of existing AppSec tools. It also includes the labor and hidden costs, namely the engineering headcount and developer hours required to triage, fix, and merge the issues surfaced by those tools. An AI agent that automates remediation addresses both the tool cost and the operational overhead.
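To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder rather than a benchmark; it simply adds the tool subscription to the loaded cost of engineering time spent on findings.

```python
# Back-of-the-envelope entitlement sizing for an AppSec remediation agent.
# All numbers below are hypothetical placeholders, not benchmarks.

appsec_tool_subscription = 150_000   # annual AppSec tool spend (USD)

engineers_triaging = 4               # engineers who spend part of their time on findings
loaded_cost_per_engineer = 200_000   # fully loaded annual cost per engineer (USD)
share_of_time_on_findings = 0.25     # fraction of their time spent triaging, fixing, merging

labor_cost = engineers_triaging * loaded_cost_per_engineer * share_of_time_on_findings

# The agent's entitlement is the tool spend plus the operational overhead it absorbs,
# not just the tool spend alone.
entitlement = appsec_tool_subscription + labor_cost
print(f"Tool spend:  ${appsec_tool_subscription:,.0f}")
print(f"Labor spend: ${labor_cost:,.0f}")
print(f"Entitlement: ${entitlement:,.0f}")
```

Even with these invented inputs, the labor line dwarfs the tool line, which is the point of targeting knowledge-work budgets rather than IT budgets alone.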
What lessons can we draw from building and selling cloud-native software?
One of the biggest technology waves before AI was cloud computing, which disrupted how software was built and delivered to business users. The cloud computing wave delivered software innovations across every layer of the stack: infrastructure (e.g., AWS, GCP), domain-specific platforms across networking, security, and data (e.g., Cloudflare, Zscaler, Datadog), and of course applications (e.g., GitHub, Salesforce, ServiceNow).
There is also a lot to learn from comparing and contrasting AI Agents with cloud-native software, starting with cloud-native software's entitlement, GTM, and cost drivers.
Entitlement: Targeted IT budgets
GTM and pricing: Steal on-prem share, grow volume, and grow the pie itself. Per-seat subscription (if used by a human) or usage-based metering (if used programmatically by other machines).
Defining features and cost drivers: Managed software delivered via browser app, desktop app, or CLI; a cloud-native operational model (automatic remote updates, 24x7 support, availability SLAs); and multiple deployment modes for varying networking, security, and governance environments.
So how does this compare to the opportunity size of an AI Agent?
Entitlement: The easiest way to think about it is as a combination of the pre-AI tool cost and the labor cost of the person using the tool. How much the AI Agent chews into existing tool and labor costs will lie on a spectrum and depends on the use case (see the sketch after the examples below). For example:
an AI agent for market research doesn't yet replace the existing tools where you store information (Google Drive), but it replaces the labor dollars spent on hiring or outsourcing research tasks.
an AI agent for customer support replaces the online support tools, but also reduces the number of human support agents needed to achieve the same unit of output as before.
an AI agent for technical documentation replaces both the tools and the technical writer hours needed to use them.
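To illustrate the spectrum above, here is a small, purely hypothetical sketch that parameterizes how much of the existing tool budget versus the labor budget each use case might displace. The budgets and displacement shares are invented for illustration only.

```python
# Hypothetical displacement shares per use case: what fraction of the existing
# tool budget and labor budget the agent can credibly capture. Illustrative only.

use_cases = {
    #                   (tool budget, labor budget, tool share, labor share)
    "market research":  (20_000,      300_000,      0.0,        0.6),
    "customer support": (100_000,     1_000_000,    1.0,        0.4),
    "technical docs":   (30_000,      250_000,      1.0,        0.7),
}

for name, (tool_budget, labor_budget, tool_share, labor_share) in use_cases.items():
    entitlement = tool_budget * tool_share + labor_budget * labor_share
    print(f"{name:18s} entitlement ~ ${entitlement:,.0f}")
```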
Pricing: This is a hotly debated topic within Silicon Valley circles. The breakout use cases, like coding agents, continue to use the cloud-native pricing model (per-seat subscription combined with usage-based metering). Given the steep marginal infrastructure cost of running agents, anything but usage-based pricing will bleed companies red, so I am not yet sold on outcome-based pricing models. This will of course evolve in the coming years as inference compute gets cheaper and new agentic usage patterns break out.
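As a rough illustration of why usage needs to be metered, the sketch below (again with hypothetical prices and costs) compares gross margin per seat under a flat subscription versus a seat-plus-metering plan as a user's agent runs grow.

```python
# Why flat per-seat pricing is risky when marginal inference cost is high.
# All prices and costs are hypothetical placeholders.

seat_price = 30.0               # flat monthly subscription per seat (USD)
included_runs = 50              # agent runs bundled into the seat
overage_price_per_run = 0.40    # metered price per run beyond the bundle
inference_cost_per_run = 0.25   # marginal infra cost per agent run

def monthly_margin(runs: int, metered: bool) -> float:
    """Gross margin for one seat in a month with `runs` agent runs."""
    revenue = seat_price
    if metered and runs > included_runs:
        revenue += (runs - included_runs) * overage_price_per_run
    cost = runs * inference_cost_per_run
    return revenue - cost

for runs in (20, 100, 500):
    flat = monthly_margin(runs, metered=False)
    hybrid = monthly_margin(runs, metered=True)
    print(f"{runs:4d} runs/month -> flat: ${flat:7.2f}, seat+metering: ${hybrid:7.2f}")
```

With these made-up numbers, the flat plan goes deeply negative for a heavy user while the metered plan stays profitable, which is the margin pressure driving usage-based pricing today.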
Defining features and cost drivers: The cloud-native delivery and operational model remains the same for AI Agents. Building AI Agents is much like designing any other complex distributed system. AI Agents need an interface (desktop app, browser app, or a device) and scale well with a cloud-native operational model. Sure, the software infrastructure itself is evolving (with new components such as model APIs, frameworks for evals and inference serving, observability, and so on), but AI Agents still fit within the distributed-systems engineering discipline.
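To ground the claim that an agent is still a distributed system, here is a minimal, vendor-neutral sketch of the moving parts: an interface call in front, a model API behind it, and an observability hook around the remote inference call. All class and function names here are illustrative assumptions, not a specific framework's API.

```python
# Minimal sketch of an agent as a small distributed system:
# an interface in front, a model API behind it, and observability around every call.
# Names and interfaces are illustrative, not a specific vendor's SDK.

import time
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trace:
    step: str
    latency_ms: float

class Observability:
    """Stand-in for a tracing/metrics pipeline (spans shipped to a backend)."""
    def __init__(self) -> None:
        self.traces: List[Trace] = []

    def record(self, step: str, latency_ms: float) -> None:
        self.traces.append(Trace(step, latency_ms))

class Agent:
    def __init__(self, call_model: Callable[[str], str], obs: Observability) -> None:
        self.call_model = call_model   # model API client (hosted or self-served inference)
        self.obs = obs

    def run(self, task: str) -> str:
        start = time.perf_counter()
        result = self.call_model(task)   # remote inference call: retries, timeouts, SLAs apply
        self.obs.record("model_call", (time.perf_counter() - start) * 1000)
        return result

# A fake model client so the sketch runs without external dependencies.
agent = Agent(call_model=lambda task: f"draft answer for: {task}", obs=Observability())
print(agent.run("summarize last quarter's support tickets"))
```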
What are some tension points that you need to be opinionated about?
If your AI Agent relies on data that lives in other software today, which data sources can you displace, and which ones must you integrate with? For the ones you integrate with, can you displace them later if the MVP succeeds?
If you are designing an AI Agent to be used by a persona, do you project shrinkage in the number of such personas worldwide in the coming years? How will the role and responsibilities of that persona change with AI? Is the role likely to grow? Is it likely to merge into other roles? Is it going to disappear?
To summarize, you will need to work backward from what the future of the job you are designing for will look like. Then you can evaluate its entitlement, pricing opportunities, and cost drivers using existing benchmarks from cloud-native software today.