More humanlike behaviors emerged in a series of 30-agent simulations. Although all the agents started with the same personality and the same overall goal—to create an efficient village and protect the community against attacks from other in-game creatures—they spontaneously developed specialized roles within the community, without any prompting. They diversified into roles such as builder, defender, trader, and explorer. Once an agent had started to specialize, its in-game actions began to reflect its new role: artists spent more time picking flowers, farmers gathered seeds, and guards built more fences.
“We were surprised to see that if you put [in] the right kind of brain, they can have really emergent behavior,” says Yang. “That’s what we expect humans to have, but don’t expect machines to have.”
Yang’s team also tested whether agents could follow community-wide rules. They introduced a world with basic tax laws and allowed agents to vote for changes to the in-game taxation system. Agents prompted to be pro- or anti-tax were able to influence the behavior of other agents around them, so much so that agents would later vote to lower or raise taxes depending on whom they had interacted with.
The team scaled up, pushing the number of agents in each simulation to the maximum the Minecraft server could handle without glitching—up to 1,000 at once in some cases. In one of Altera’s 500-agent simulations, they watched how the agents spontaneously came up with and then spread cultural memes (such as a fondness for pranking, or an interest in eco-related issues) among their fellow agents. The team also seeded a small group of agents to try to spread the parody religion Pastafarianism around the different towns and rural areas that made up the in-game world, and watched as these Pastafarian priests converted many of the agents they interacted with. The converts went on to spread the word of the Church of the Flying Spaghetti Monster to nearby towns in the game world.
The way the agents acted might seem eerily lifelike, but their behavior combines patterns learned by the LLMs from human-created data with Altera’s system, which translates those patterns into context-aware actions, like picking up a tool or interacting with another agent. “The takeaway is that LLMs have a sophisticated enough model of human social dynamics [to] mirror these human behaviors,” says Altera cofounder Andrew Ahn.
In other words, the data makes them excellent mimics of human behavior, but they are in no way “alive.”
But Yang has grander plans. While Altera plans to expand into Roblox next, Yang hopes to eventually move beyond game worlds altogether. Ultimately, his goal is a world in which humans don’t just play alongside AI characters but also interact with them in their day-to-day lives. His dream is to create a vast number of “digital humans” who actually care for us and will work with us to help us solve problems, as well as keep us entertained. “We want to build agents that can really love humans (like dogs love humans, for example),” he says.
This viewpoint—that AI could love us—is pretty controversial in the field, with many experts arguing it’s not possible to re-create emotions in machines using current techniques. AI veteran Julian Togelius, for example, who runs the games testing company Modl.ai, says he likes Altera’s work, particularly because it lets us study human behavior in simulation.