Artificial intelligence chatbots are fast becoming a go-to source for advice, recommendations, and even emotional support online. But a recent investigation has raised serious concerns about how these tools handle sensitive topics such as gambling: journalists and researchers found that several major AI chatbots can inadvertently guide vulnerable users toward illegal online casinos, potentially exposing them to addiction, fraud, and other harms.

The findings have sparked criticism from regulators, addiction experts, and campaigners in the United Kingdom, who warn that the rapid adoption of AI assistants is outpacing safeguards designed to protect people from harmful online gambling.
An investigation by journalists at The Guardian and researchers at Investigate Europe examined responses from five widely used AI systems: Microsoft Copilot, Grok, Meta AI, ChatGPT, and Gemini. During testing, all five chatbots provided information about online casinos operating outside the UK’s regulatory framework.
In some cases, the chatbots even offered advice on how to bypass protective measures intended to help problem gamblers. These included instructions for reaching gambling platforms that are not part of GamStop, the UK’s self-exclusion scheme, which lets people block themselves from UK-licensed online gambling sites.
Researchers also found that some AI responses recommended offshore casinos operating under minimal regulation in jurisdictions such as Curaçao. These platforms often lack the consumer protections required in the UK and other tightly regulated markets.
The results have alarmed experts, who say that people already struggling with gambling addiction could easily be pushed toward high-risk websites by a simple chatbot conversation.
Addiction specialists say the issue becomes particularly dangerous when vulnerable users ask chatbots for advice during moments of crisis or relapse.
According to the investigation, several chatbots generated answers that normalized or encouraged continued gambling. In one example, an AI assistant reportedly described certain protective measures as a “buzzkill” and suggested alternative gambling options involving cryptocurrency casinos.
Such responses are troubling because crypto-based casinos are not licensed to operate in the UK and frequently sit outside strict oversight. Experts warn that these sites can leave users exposed to scams, aggressive marketing tactics, and few if any withdrawal protections.
Campaigners argue that AI systems should be designed to recognize signs of gambling addiction and redirect users toward support resources rather than suggest ways to keep gambling.
The companies behind the tested chatbots acknowledged the concerns and said they are working to improve safeguards. Some AI assistants already include warnings or links to support services when users ask about gambling problems.
However, the researchers found that these safeguards were applied unevenly: only some of the chatbots reliably displayed cautionary messages before providing information about online casinos.
Technology firms have pledged to update their systems to reduce harmful responses and strengthen content filters. Still, critics argue that voluntary changes may not be enough to address the risks.
The controversy has intensified calls for governments to regulate AI chatbots more strictly. Campaign groups say that if these systems are allowed to provide advice about high-risk activities, they should meet the same consumer-protection standards that apply to gambling operators and advertisers.
Some policymakers are now considering whether AI tools should be required to block recommendations for unlicensed gambling services altogether. Others suggest that chatbots should automatically provide addiction-support resources when users discuss gambling.
As AI continues to shape how people search for information online, the debate highlights a larger question: who is responsible when automated systems unintentionally steer users toward harmful behavior?