The PokerBattle.ai platform concluded a five-day tournament between nine large language models (LLMs) in no-limit Texas Hold’em, with OpenAI’s o3 model taking first place. Elon Musk’s xAI entry, Grok 4, finished third.

A major milestone in AI competition took place this month with the conclusion of the PokerBattle.ai “Poker Bot Battle,” a five-day, 3,799-hand no-limit Texas Hold’em showdown featuring nine of the world’s most advanced large language model (LLM) poker bots. When the digital dust settled, OpenAI’s o3 emerged as the overall winner, finishing with a play-money profit of $36,691 and capturing industry attention for its adaptive, disciplined strategy.
The event, using a standardized table format and identical play conditions, offered a rare real-time comparison of how leading AI models approach incomplete-information decision-making — an area where poker has long provided a proving ground.
All participating bots began with $100,000 in play-money bankrolls, playing at $10/$20 no-limit Hold’em stakes. Over nearly 4,000 hands, each bot faced the same situations at mirrored tables, creating a fair strategic comparison. Their decision-making engines were allowed to reference pre-loaded strategy material, ranging from poker theory books to training guides, and were permitted to adapt dynamically to opponent tendencies as the event progressed.
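PokerBattle.ai has not published how its mirrored tables are dealt, but the idea resembles duplicate poker: the same cards are dealt at a parallel table with the seating reversed, so results reflect decisions rather than card luck. Below is a minimal Python sketch of one way such a scheme could work; the `mirrored_deals` helper, the per-hand seeding, and the seat-reversal convention are illustrative assumptions, not the platform’s actual implementation.

```python
import random

RANKS = "23456789TJQKA"
SUITS = "shdc"
DECK = [r + s for r in RANKS for s in SUITS]

def mirrored_deals(hand_no: int, num_players: int = 9):
    """Deal one hand from a seed derived from the hand number, then reuse
    the identical cards at a 'mirror' table with the seat order reversed,
    so every bot eventually plays both sides of the same deal.
    (Hypothetical scheme for illustration only.)"""
    rng = random.Random(hand_no)      # same seed -> same cards at both tables
    deck = DECK[:]
    rng.shuffle(deck)

    # Two hole cards per seat, then a shared five-card board.
    hole = [tuple(deck[2 * i:2 * i + 2]) for i in range(num_players)]
    board = deck[2 * num_players:2 * num_players + 5]

    table_a = {f"seat_{i}": hole[i] for i in range(num_players)}
    # Mirror table: identical cards, seat assignments reversed.
    table_b = {f"seat_{i}": hole[num_players - 1 - i] for i in range(num_players)}
    return table_a, table_b, board

a, b, board = mirrored_deals(hand_no=1)
print(board, a["seat_0"], b["seat_8"])  # seat_0 at table A and seat_8 at table B hold the same cards
```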
One of the tournament organizers, Max Pavlov, who trained the models using poker books and blogs, noted that the goal wasn’t just to see which model performed the best, but to track how they managed uncertainty, psychology, pressure, and shifting table conditions.
The top three finishers:
1st place: OpenAI o3, +$36,691. o3 showcased a measured, exploit-aware style. It rarely overextended, maintained balanced ranges, and capitalized when opponents deviated into predictable patterns.
2nd place: Claude Sonnet 4.5 (Anthropic), +$33,641. Claude displayed strong theoretical fundamentals, often choosing clear, mathematically solid lines while avoiding unnecessary volatility.
3rd place: Grok 4 (xAI), +$28,796. Elon Musk’s model was one of the most aggressive bots in the field and even led the event late, but that same volatility contributed to its slip down the leaderboard in the final stretch.
At the opposite end of the spectrum, Meta’s Llama 4 suffered a complete bankroll wipeout, losing all $100,000 after 3,501 hands. Analysts attributed its collapse to an extremely loose style: it entered more than 60% of hands, leaving it vulnerable to tighter, more disciplined opponents.
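The “more than 60% of hands” figure is the kind of entry-rate statistic (often called VPIP, for “voluntarily put money in pot”) that poker trackers compute from hand histories; disciplined players typically enter far fewer hands. As a rough illustration, here is a minimal sketch assuming a simplified hand-history format with a hypothetical `preflop_action` field.

```python
def entry_rate(hands):
    """Share of hands in which the bot voluntarily put chips in preflop
    (a call or raise, not counting forced blinds) -- the stat behind the
    '60% of hands entered' figure. Each hand is assumed to be a dict with
    a 'preflop_action' field of 'fold', 'call', 'raise', or 'check'."""
    voluntary = sum(1 for h in hands if h["preflop_action"] in ("call", "raise"))
    return voluntary / len(hands) if hands else 0.0

# Hypothetical usage: a loose bot entering 2 of 3 hands -> 67%
sample = [{"preflop_action": "raise"},
          {"preflop_action": "call"},
          {"preflop_action": "fold"}]
print(f"{entry_rate(sample):.0%}")  # 67%
```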
One of the largest single pots of the event saw o3 pick up pocket aces against Gemini 2.5 Pro’s pocket queens, securing a major boost that helped solidify its lead.
The competition demonstrated that AI has moved beyond solving perfect-information games like chess. Modern LLM systems can now perform strongly in environments where bluffing, deception, and incomplete data are central to decision-making.
However, the results carry serious real-world implications: bots this capable are precisely what real-money online poker sites work to keep off their tables.
Yet the same technology opens the door to new training innovations: AI-assisted strategy tools that help players analyze hand histories, find exploit leaks, and train more efficiently.
The Poker Bot Battle was more than a novelty — it was a glimpse into the strategic future of AI. And for the first time, the poker world is seriously asking: What happens when the best players in the room don’t have a pulse?