AI in iGaming: legal risks and ethical practices

Written by Caro Vallejo

In an exclusive interview with SiGMA News, Beniamino Santoro, data protection expert and privacy law lecturer at Ascencia Business School, shares his insights on the evolving landscape of AI and data protection in the iGaming sector. A leading expert in privacy law and AI ethics, Santoro delves into the complex challenge of balancing security and privacy in AI-driven systems, particularly in identity management and fraud detection, and highlights the importance of regulatory compliance, ethical considerations, and transparency in AI decision-making. He also offers actionable strategies for iGaming operators to ensure compliant and ethical AI use, addressing the pressing need for robust regulation in this rapidly evolving sector.

Evolution of AI and data protection regulations

Beniamino Santoro highlights that regulatory frameworks for AI and data protection have evolved significantly, shifting from broad principles to “targeted, risk-based governance.” The GDPR established “foundational tenets of transparency and accountability,” but the rapid growth of AI introduced challenges such as “algorithmic bias, opaque decision-making, and privacy risks.” This led to specialized frameworks like the EU AI Act, which categorizes AI systems by risk and mandates strict compliance for high-risk applications in sectors like healthcare and finance. The Act’s penalties, up to €35 million or 7% of global turnover, emphasize accountability.

Globally, regulatory approaches vary significantly. The EU uses a stringent risk-based model, while the US focuses on consumer rights and transparency, and China enforces strict data localization. This creates compliance challenges for multinational organizations. Notably, “ethics-driven governance” has become integral, with standards like “privacy by design” and technologies like federated learning. As enforcement increases, organizations prioritize AI literacy and robust governance to navigate these complexities effectively.

Balancing security and privacy in AI-driven identity management

Santoro shared with SiGMA News that centralized data repositories are vulnerable to cyberattacks, and AI can infer sensitive information, undermining data minimization principles. Global regulatory inconsistencies also force organizations to adopt minimal compliance strategies, weakening privacy protections. To address these challenges, Santoro advocates for “agile legislation, global cooperation, and user-centric system design,” alongside stricter regulation for high-risk systems and robust data sovereignty protocols. Emerging technologies like federated learning offer promise but face technical limitations.
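To make the federated learning idea concrete, here is a minimal Python sketch of federated averaging: each participant trains on data that stays local, and only model weights are pooled. The linear model, synthetic data, and all names are illustrative stand-ins, not anything Santoro describes.

```python
# Minimal federated-averaging (FedAvg) sketch: raw data never leaves a client.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient steps on one client's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Each client trains locally; the server only sees weight updates."""
    updates = [local_update(global_w, X, y) for X, y in clients]
    return np.mean(updates, axis=0)  # server averages the updates

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):  # three operators, each holding private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches true_w without ever pooling raw records centrally
```

The privacy gain is that the coordinating server only ever sees averaged weights, never raw records, which is why the approach is promising; the technical limits Santoro alludes to include communication overhead and the fact that shared weights can still leak information about the underlying data.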

AI-driven fraud detection in iGaming: strategies for GDPR adherence

SiGMA News: Fraud detection in payments is crucial for the iGaming industry. How can operators balance AI-driven transaction security without violating GDPR restrictions on automated decision-making?

Beniamino Santoro: Balancing AI-driven fraud detection with GDPR compliance in the iGaming industry necessitates a nuanced approach, particularly in light of restrictions on automated decision-making. Operators can achieve this balance through the following strategies:

1. Avoid solely automated decisions: Under Article 22, the GDPR prohibits decisions based solely on automated processing that produce legal or similarly significant effects. Human oversight is crucial; analysts should review flagged transactions before final decisions are made.

2. Justify automated processing: Where automated decision-making is used, the GDPR permits it only under specific conditions: necessity for the performance of a contract (e.g., fraud prevention), legal authorisation, or the user’s explicit consent. Operators must document and justify their reliance on AI under these criteria.

3. Implement explainable AI: Transparency is vital for GDPR compliance. AI systems must provide interpretable outputs, offering clear explanations for decisions, such as why a transaction was flagged, to ensure fairness and transparency.

4. Enable user rights: Operators must uphold users’ GDPR rights, including access to human intervention, the ability to challenge AI decisions, and transparency about data processing practices.

5. Minimise data usage: Data minimisation principles require operators to collect only essential data for fraud detection. Techniques like pseudonymisation and encryption can further enhance data security.

6. Use risk-based authentication: AI can assign risk scores to transactions without making final decisions. High-risk cases may trigger additional verification steps, ensuring human involvement while bolstering security.

7. Conduct regular audits: Frequent audits are necessary to ensure AI systems remain GDPR-compliant, unbiased, and effective in fraud detection.

By adopting these measures, illustrated in the brief sketch below, iGaming operators can leverage AI responsibly while safeguarding user privacy and adhering to GDPR’s regulatory framework.
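As a rough illustration of points 1, 5, and 6 above (with a nod to point 3’s explainability), the following Python sketch pseudonymises the user ID, computes a risk score with a placeholder model, attaches a human-readable reason, and routes high-risk cases to an analyst rather than blocking automatically. The score_transaction logic and the 0.4/0.7 thresholds are hypothetical, not a reference implementation.

```python
# Hedged sketch: risk-based triage with pseudonymisation and human oversight.
import hashlib
from dataclasses import dataclass

@dataclass
class Transaction:
    user_id: str
    amount: float
    country: str

def pseudonymise(user_id: str, salt: str) -> str:
    """Replace the raw user ID with a salted hash (data minimisation)."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def score_transaction(tx: Transaction) -> float:
    """Stand-in risk model; a real system would use a trained classifier."""
    score = min(tx.amount / 10_000, 1.0)
    if tx.country not in {"MT", "DE", "FR"}:  # illustrative low-risk set
        score += 0.3
    return min(score, 1.0)

def triage(tx: Transaction, salt: str = "rotate-me") -> dict:
    """AI assigns a score but never blocks on its own (GDPR Art. 22):
    high-risk cases go to a human analyst, with a stated reason."""
    score = score_transaction(tx)
    if score >= 0.7:
        action, reason = "human_review", "high amount / unusual jurisdiction"
    elif score >= 0.4:
        action, reason = "step_up_verification", "moderate risk score"
    else:
        action, reason = "approve", "low risk score"
    return {"user": pseudonymise(tx.user_id, salt),
            "score": round(score, 2), "action": action, "reason": reason}

print(triage(Transaction("player-42", amount=9_500, country="BR")))
# -> routed to human review, with a reason a GDPR rights request can cite
```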

SiGMA News: AI has also been used to detect patterns of compulsive gambling. Are there any legal risks in using AI for this purpose?

Beniamino Santoro: Using AI to detect at-risk gambling behaviour offers benefits such as early intervention but introduces legal risks related to data privacy, regulatory compliance, ethics, and liability.

1. Data privacy and consent: AI systems process vast amounts of personal data, requiring a lawful basis for processing under regulations like the EU’s GDPR. Obtaining valid consent from individuals with gambling addictions is challenging, as their autonomy may be compromised, rendering consent potentially invalid. Mishandling sensitive data, such as biometric information, may also result in significant legal liability.

2. Regulatory compliance: AI systems for gambling addiction detection must comply with regulations like the EU AI Act, which may classify specific applications as “high-risk” and mandate strict oversight. Operators must also adhere to jurisdiction-specific gambling laws, such as Germany’s requirement for automated addiction detection systems. Non-compliance can lead to penalties or loss of licenses.

3. Transparency and accountability: AI’s “black box” nature complicates the explanation of decision-making processes, raising accountability concerns if harm occurs due to AI-driven decisions (e.g., account suspension). Regulators are increasingly demanding clear documentation to ensure fairness and prevent discriminatory practices.

4. Ethical concerns: AI may be misused for predatory practices, such as targeting vulnerable gamblers with marketing or incentives that exacerbate addiction, creating ethical dilemmas for operators balancing profit motives with responsible gaming obligations.

5. Liability risks: Operators face potential lawsuits if AI systems fail to detect addiction or if AI-driven interventions cause harm, such as wrongful account restrictions or discrimination.

To mitigate these risks, operators must implement robust data protection measures, ensure transparency in AI processes, and comply with relevant regulations while prioritising ethical standards.

Privacy challenges with facial recognition in iGaming

Santoro considers that integrating facial recognition technology (FRT) in online gaming for age verification and avatar creation presents “significant privacy challenges.” FRT relies on sensitive biometric data, which, if breached, could lead to identity theft, since facial data cannot be changed the way a password can. Opaque data handling further undermines user trust and may violate regulations like the GDPR.

According to Santoro, centralized facial data databases are vulnerable to cyberattacks and exhibit demographic biases, leading to misidentification or exclusion. Collecting facial data without explicit consent violates privacy principles, necessitating compliance with complex biometric regulations. To mitigate these risks, Santoro emphasizes the need for “robust encryption, secure data storage, transparency in data usage, bias-mitigation techniques, explicit consent with opt-out options, and regular privacy audits.”
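As one hedged example of the “robust encryption, secure data storage” measures Santoro lists, the Python sketch below encrypts a facial-embedding record at rest using the widely used cryptography package. The record fields are illustrative, and a real deployment would load the key from a secrets manager or HSM rather than generating it inline.

```python
# Minimal sketch of encrypting a biometric record at rest (pip install cryptography).
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: load from a secrets manager
fernet = Fernet(key)

record = {
    "user": "pseudonymous-id-7f3a",            # never the raw identity
    "embedding": [0.12, -0.48, 0.91],          # illustrative face embedding
    "consent": {"granted": True, "purpose": "age_verification"},
}

token = fernet.encrypt(json.dumps(record).encode())  # ciphertext at rest
restored = json.loads(fernet.decrypt(token))         # decrypt only when needed
assert restored["consent"]["granted"]
```

Storing consent metadata alongside the encrypted template, as sketched here, also supports the explicit-consent and opt-out requirements Santoro raises, since a revocation can be enforced by deleting the key or the record.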

The integration of AI in iGaming poses challenges in balancing security and privacy. Regulatory frameworks like the GDPR and the EU AI Act stress transparency and accountability. Operators must focus on AI literacy, robust governance, and user-centric design to manage these complexities effectively, particularly when using AI to detect problematic gambling or deploying facial recognition.

This article was first published in Spanish on 24 March 2025.
