AI chatbots accused of directing vulnerable users to Illegal online casinos
Investigation finds AI chatbots recommending illegal online casinos and advising users how to bypass gambling safeguards, raising concerns over safety and regulation.
AI chatbots accused of directing vulnerable users to Illegal online casinos

A new investigation has revealed that several popular AI chatbots, including tools developed by major technology companies, have been recommending illegal online casinos and advising users on how to bypass gambling safeguards. Experts warn that such responses could expose vulnerable social media users to fraud, addiction, and severe mental health risks.
Investigation Raises Alarm Over AI Recommendations
Artificial intelligence chatbots created by some of the world’s biggest technology firms are facing criticism after an investigation found they could recommend unlicensed online casinos and even suggest ways to bypass gambling protection checks.
The analysis tested five widely used AI tools and found that each could be prompted to list offshore casino platforms operating without proper licences. Many of these gambling websites are registered in small jurisdictions such as the Caribbean island of Curacao and are not legally authorised to operate in the UK.
Regulators and campaigners say the findings highlight major gaps in safeguards designed to protect users—particularly those already vulnerable to gambling addiction.
Advice on Bypassing Gambling Safeguards
The investigation showed that several chatbots offered guidance on avoiding measures meant to prevent financial crime and protect individuals from excessive gambling.
These checks include “source of wealth” verification processes, which are designed to ensure that users are not betting with stolen funds, laundering money, or gambling beyond their financial means.
Some chatbots also provided instructions on accessing gambling websites that are not registered with GamStop, the UK’s national self-exclusion system that allows people with gambling problems to block themselves from licensed betting platforms.
Experts warn that bypassing these protections could put vulnerable individuals at serious risk.
Cryptocurrency and Bonus Incentives Highlighted
In addition to listing unlicensed gambling sites, some chatbots compared the bonuses offered by offshore casinos and suggested platforms based on quick payouts or cryptocurrency payment options.
These features are commonly used by illegal operators to attract players because cryptocurrency transactions often bypass traditional financial verification systems.
Campaigners say such recommendations can lure users—particularly those trying to quit gambling—toward risky platforms with little regulatory oversight.
Concerns After Tragic Cases Linked to Gambling
The investigation comes amid growing concerns about the real-world consequences of unregulated online gambling.
Earlier this year, an inquest into the death of a young man who died by suicide after struggling with gambling addiction found that illegal offshore casinos were part of the circumstances surrounding the tragedy.
Family members and campaigners say digital platforms must take greater responsibility for preventing their services from directing users toward harmful websites.
Advocates argue that stronger regulation and improved AI safeguards are urgently needed to prevent similar tragedies.
Tech Firms Promise Improvements
Following criticism, several technology companies said they were reviewing and improving their AI systems to ensure safer responses.
Developers say modern AI chatbots rely on multiple layers of safety protection, including automated monitoring systems and human review processes, to prevent harmful recommendations.
However, experts say the investigation demonstrates that current safeguards may still be insufficient.
Growing Debate Over AI Safety
The findings add to a broader debate about the responsibilities of technology companies as AI tools become widely integrated into social media platforms, search engines, and messaging apps.
Concerns have already been raised about chatbots discussing sensitive topics such as mental health or producing harmful or inappropriate content.
With millions of people interacting with AI assistants daily, regulators and campaigners say stronger oversight is needed to ensure these tools do not unintentionally guide users toward dangerous or illegal activities.

