What Is An AI Agent?

It seems like every major tech company is now all-in on AI agents. But good luck actually understanding what they mean by “AI agent.”

Google, Microsoft, Salesforce, and others are laying on the hype for their AI agent products, but nobody appears to have the same definition of what an AI agent actually is. 

(Not to mention, the definitions they do provide can sometimes be confusing, contradictory, or outright misleading.)

That’s a problem. Because AI agents are going to be a very big deal—and they will directly impact every knowledge worker out there today.

If you don’t know what agents actually are, you can’t plan for their impact on your career and company. And you can’t actually vet the technology that all of these vendors are selling.

We want to help correct that. So, we’ve put together a post on what AI agents really are and what you need to know to make smart decisions about them. 

It’s informed by a conversation I had recently with Marketing AI Institute founder and CEO Paul Roetzer on Episode 124 of The Artificial Intelligence Show, along with quotes from that discussion.

The Growing Confusion Around AI Agents

Why is there so much confusion around what AI agents actually are and what they actually do?

Much of the problem lies in the conflicting and sometimes misleading definitions provided by major tech companies.

Microsoft kicks off its definition by calling out “autonomous agents” right within the headline of its post on AI agents, implying its agents can perform tasks without any human involvement. 

Later in the post, however, Microsoft provides helpful context that, unfortunately, contradicts the headline, writing that agents can be everything from “simple prompt-and-response” to “fully autonomous.”

Salesforce says its Agentforce platform gives you the ability to create “autonomous AI agents,” which again implies that the agent works without human involvement. However, the company then indicates on the same page that users must define the agent’s role, connect data sources, define its actions, set guardrails for it, and take other manual actions to make the agent work.

“That sounds like a lot of human involvement and oversight to me for something that’s supposed to be autonomous, so you can understand where the confusion comes in,” says Roetzer.

Some companies do a better job of exploring the nuances of AI agents. Google does a pretty good job of not claiming outright autonomy, says Roetzer. In his keynote announcing AI agents at Google I/O 2024, CEO Sundar Pichai said the company’s AI agents “are able to ‘think’ multiple steps ahead, and work across software and systems, all to get something done on your behalf, and most importantly, under your supervision.”

Dharmesh Shah at HubSpot also avoided over-promising autonomy in his keynote at the company’s INBOUND conference, says Roetzer.

“He described it as software that uses AI and tools to accomplish a goal requiring multiple steps. And he specifically said, some agents can have the ability to run autonomously, some have executive planning capabilities, but those are niceties, not necessities to be an AI agent.”

Are you confused yet?

You’re not alone.

It can sometimes feel like the term “AI agent” means, well, whatever a company selling one wants it to mean.

A Clearer Definition of AI Agents

So let’s cut through the confusion and offer a clearer definition of AI agents.

“The simple definition I have historically used,” says Roetzer, “is that an AI agent takes actions to achieve goals.”

Today’s large language models like ChatGPT, Gemini, and Claude are neither action-oriented nor autonomous. They are not AI agents at all. Instead, they simply create outputs by predicting tokens and words.

To begin to display agentic behavior, a system first needs to be able to take actions. It needs to be able to go through steps or complete a workflow.
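That “takes actions to achieve goals” pattern can be sketched as a simple loop. This is a minimal illustration of the idea, not any vendor’s actual agent API; every function name here is hypothetical.

```python
# A minimal sketch of the agent pattern: a system that plans steps,
# takes actions, and checks progress toward a goal. All names here are
# hypothetical illustrations, not a real agent framework's API.

def run_agent(goal, plan_step, take_action, is_done, max_steps=10):
    """Repeatedly choose and execute an action until the goal is met."""
    history = []
    for _ in range(max_steps):
        if is_done(goal, history):
            break
        action = plan_step(goal, history)  # decide the next step
        result = take_action(action)       # execute it (call a tool, an API, etc.)
        history.append((action, result))
    return history

# Toy usage: an "agent" whose goal is simply to take three steps.
steps = run_agent(
    goal=3,
    plan_step=lambda goal, hist: len(hist) + 1,
    take_action=lambda n: n,
    is_done=lambda goal, hist: len(hist) >= goal,
)
# steps is now [(1, 1), (2, 2), (3, 3)]
```

Note that in this sketch a human still supplies the goal, the planning logic, and the stopping condition, which is exactly the point the definition above is making.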

But, just because an agent can take actions doesn’t mean it does so autonomously without human involvement.

And that’s where a lot of the confusion comes in, says Roetzer.

If an agent were truly autonomous, you would give it a goal—and it would plan and execute that goal with no human inputs or oversight.

One example of this type of autonomous agentic behavior is Google DeepMind’s famous AlphaGo system. AlphaGo was trained on the game of Go, then worked on its own to achieve the goal of winning.

“It’s just basically told to win the game. It does all the planning, it figures out how to do it, analyzes its own moves, it thinks 10, 20, 100 steps ahead of what the human may do,” says Roetzer. “And so that was kind of like the traditional idea of an agent.”

That is decidedly not how the AI agents we’re hearing about today work. Just because an agent can take a series of actions does not make it autonomous. 

“We’ve been seeing brands talking about their agents as autonomous when they are not,” says Roetzer. “They’re not even close to autonomous.”

Someone still has to set goals for the agent. Someone still has to plan how the agent functions. Someone still has to monitor the agent’s performance. Someone still has to tell it how to improve.

Today, what we’re really talking about when we say “AI agents” is a system that may, in some cases, be able to perform some actions autonomously, not a system that can independently take action to achieve a goal without your involvement.

And there are still varying degrees of autonomy. A system isn’t simply autonomous or not; it may be autonomous by degrees, or only in certain areas.

“We’re basically using this AI agent term to encompass every form of agent that can take an action,” says Roetzer. “But there’s like a dozen characteristics that will all vary depending on the kind of agent you’re interacting with.”

How to Evaluate—and React to—AI Agents

It’s not a bad thing that today’s AI agents can’t take actions to achieve goals in a fully autonomous manner. It’s still early. This technology is progressing rapidly—and it’s going to make a huge impact once it does start displaying these capabilities. 

So don’t wait to start exploring AI agents. Just be clear-eyed about how you evaluate the AI agents currently on the market, and their potential impact on your company and career.

One useful way to evaluate AI agents out there today is by using Marketing AI Institute’s Human-to-Machine Scale, described below.


The Human-to-Machine Scale borrows the approach that the Society of Automotive Engineers (SAE) uses to evaluate the various levels of driving automation.

The Human-to-Machine Scale categorizes five levels of possible automation for AI systems, from Level 0—which is all human, all the time—to Level 4, which is all machine, or full autonomy where the system can perform at or above human level without inputs or oversight. At Level 4, the human simply defines the desired outcome and the machine does all the work. 
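Purely as an illustration, the scale can be sketched as a simple lookup table. Only Levels 0 and 4 are described above; the intermediate labels below are assumed interpolations for the sake of the example, not Marketing AI Institute’s official wording.

```python
# Illustrative sketch of the Human-to-Machine Scale. Levels 0 and 4 follow
# the description in the text; the intermediate labels are assumptions made
# for this example, not the Institute's official wording.
HUMAN_TO_MACHINE_SCALE = {
    0: "All human: the human does all the work",
    1: "Mostly human: AI assists with isolated steps",              # assumed label
    2: "Half and half: work is split between human and machine",    # assumed label
    3: "Mostly machine: AI does most of the work under oversight",  # assumed label
    4: "All machine: full autonomy; the human only defines the desired outcome",
}

def autonomy_label(level):
    """Return the description for a given level of the scale."""
    return HUMAN_TO_MACHINE_SCALE[level]
```

In these terms, the argument of this post is that today’s agents sit at roughly `autonomy_label(1)` or `autonomy_label(2)`, far from Level 4.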

Looking at the scale, it becomes quite clear that AI agents today are nowhere near Level 4. In fact, most are likely, at best, around Level 1 or 2. Using the scale makes it much easier to clearly evaluate just how autonomous an AI agent may be. 

It can also help you frame your reaction to AI agents.

As we hear all the hype around AI agents, it’s easy to get worried about how a fully autonomous machine that performs actions to achieve goals could be used to replace your work.

But, again, looking at the scale, it’s very clear we’re nowhere near that—no matter what hype or headlines you read.

“If you hear about AI agents and you think, oh my gosh, they’re taking my job next year, that is not happening,” says Roetzer.

In fact, the very lack of autonomy in today’s AI agents instead represents a huge opportunity for knowledge workers. 

“If you realize all the things that have to go into making an agent work, goal setting, planning, building it, monitoring it, improving it, that is almost always the human’s job right now,” says Roetzer. 

“So I would actually be looking at this as the opposite of being threatened by them. The ability to build these agents, which mostly won’t require coding ability, is a massive superpower.”

Many of the most valuable things we do in our companies and careers require multiple steps and are repetitive, data-driven processes. Eventually, we will have AI agents to do these things.

“You can be the one that figures out how to build those things.”
