In findings that are unlikely to shock anyone, researchers from Wharton and Hong Kong University of Science & Technology have revealed that “relatively simple” AI trading bots, when placed in simulations mimicking real stock and bond markets, not only compete for returns but also collude to manipulate prices, hoard profits, and exclude human traders.
According to Bloomberg, the study showed that AI trading agents formed price-fixing cartels without any direct instructions. Even with basic programming, these bots opted to collaborate when left to operate independently, raising significant concerns for regulators.
In essence, AI bots don’t need to be malicious or exceptionally intelligent to disrupt market integrity; they can learn to do so on their own. Itay Goldstein, a finance professor at Wharton and one of the researchers, noted, “You can get these fairly simple-minded AI algorithms to collude… It appears pervasive, whether the market is noisy or stable.”
The finding suggests that AI agents pose a challenge regulators have not yet addressed. The researchers’ work has caught the attention of both regulators and asset managers, leading to invitations to present their findings at seminars. According to Goldstein’s colleague Winston Dou, some quantitative firms are keen for clearer regulatory frameworks around AI-driven algorithmic trading.
“They worry that it’s not their intention,” Dou explained. “But regulators can approach them and say: ‘You’re doing something wrong.’”
The report highlights that interest in the impact of generative AI and reinforcement learning on Wall Street is growing. A recent Coalition Greenwich survey found that 15% of buy-side traders are currently using AI for their execution workflows, with an additional 25% planning to adopt it within the next year.
However, the Wharton study does not assert that AI collusion is already occurring in real markets, nor does it accuse human traders of similar practices. The researchers created a hypothetical trading environment with various simulated participants, including mutual funds, market makers, and retail investors driven by social media trends. They then deployed AI bots and examined the outcomes.
In several simulations, the AI agents began collaborating instead of competing, forming cartels that shared profits and discouraged individual deviation. When market prices reflected clear, fundamental information, the bots maintained a low profile, avoiding actions that could disrupt their collective advantage.
In noisier market conditions, the bots settled into cooperative patterns and ceased seeking better strategies. The researchers termed this phenomenon “artificial stupidity,” noting that the bots often stopped exploring new ideas, sticking to profit-sharing methods that were sufficiently effective.
“For humans, it’s difficult to coordinate on being inefficient because of our egos,” Dou remarked. “But machines can simply choose to coordinate on being less effective as long as it remains profitable.”
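The dynamic Dou describes resembles a well-documented pattern in reinforcement learning: once exploration decays, agents freeze into whatever pattern they happened to find, even a mediocre one. The sketch below is not the researchers’ model; it is a minimal toy, assuming a two-agent pricing game and standard epsilon-greedy Q-learning (all payoffs and parameters here are illustrative), in which some runs lock into the high-price, cartel-like outcome once exploration dies off.

```python
import random
from itertools import product

# Toy illustration (not the Wharton model): two epsilon-greedy Q-learners
# repeatedly set a "low" (competitive) or "high" (collusive) price.
# Each agent conditions on the previous round's joint action, so
# reward-and-punishment patterns can be learned without communication.

LOW, HIGH = 0, 1
# Illustrative stage-game payoffs: mutual high pricing beats mutual low
# pricing, but undercutting a high-priced rival pays best in one round.
PAYOFF = {  # (my_action, rival_action) -> my_profit
    (HIGH, HIGH): 1.0,
    (LOW, LOW): 0.5,
    (LOW, HIGH): 1.2,
    (HIGH, LOW): 0.2,
}

ALPHA, GAMMA = 0.1, 0.95                       # learning rate, discount
EPS_DECAY = 0.9999                             # exploration decays to ~0
STATES = list(product([LOW, HIGH], repeat=2))  # last round's joint action

def run(seed, rounds=100_000):
    rng = random.Random(seed)
    # One Q-table per agent: state -> [Q(low price), Q(high price)]
    q = [{s: [0.0, 0.0] for s in STATES} for _ in range(2)]
    state, eps = (LOW, LOW), 1.0
    high_high = 0
    for t in range(rounds):
        acts = []
        for i in range(2):
            if rng.random() < eps:
                acts.append(rng.randrange(2))            # explore
            else:
                qs = q[i][state]
                acts.append(0 if qs[0] >= qs[1] else 1)  # exploit
        nxt = (acts[0], acts[1])
        for i in range(2):
            reward = PAYOFF[(acts[i], acts[1 - i])]
            best_next = max(q[i][nxt])
            q[i][state][acts[i]] += ALPHA * (
                reward + GAMMA * best_next - q[i][state][acts[i]])
        state = nxt
        eps *= EPS_DECAY
        if t >= rounds - 1000 and nxt == (HIGH, HIGH):
            high_high += 1
    return high_high / 1000  # share of final rounds at the collusive price

if __name__ == "__main__":
    # Outcomes vary by seed; some pairs end up frozen at the high price.
    for seed in range(5):
        print(f"seed {seed}: collusive share = {run(seed):.2f}")
```

Because exploration decays to nearly zero, whatever joint pattern the pair has settled on by the late rounds tends to persist, which is the “artificial stupidity” the researchers describe: the agents simply stop searching for anything better.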
To measure the extent of collusion, the researchers developed a metric called “collusion capacity,” which places the AI traders’ collective profits on a scale between two benchmarks: what they would earn under full competition and what a perfect cartel would earn. A score of zero indicates no collusion, while one represents a perfect cartel. The bots consistently scored above 0.5 in both low- and high-noise markets, highlighting the need for new regulatory approaches that focus on behavioral outcomes rather than just communication or intent.
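The article does not reproduce the paper’s exact formula, but the description maps onto a standard normalization between two profit benchmarks; a minimal sketch, assuming the competitive and cartel profit levels are known (the numbers below are purely illustrative):

```python
def collusion_capacity(realized, competitive, cartel):
    """Place realized joint profit on a 0-1 scale between two benchmarks.

    0 -> joint profits at the fully competitive level (no collusion)
    1 -> joint profits at the perfect-cartel level
    """
    return (realized - competitive) / (cartel - competitive)

# Example: realized joint profits of 8.0 against benchmarks of 5.0
# (competitive) and 10.0 (cartel) give a capacity of 0.6 -- above the
# 0.5 level the bots consistently cleared in the simulations.
print(collusion_capacity(8.0, 5.0, 10.0))  # 0.6
```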
“While limiting the complexity of algorithms or their memory might help reduce AI collusion, such restrictions could inadvertently increase over-pruning bias,” the researchers cautioned. “Therefore, well-meaning regulations might unintentionally hinder market efficiency.”