Friday, June 13, 2025

Why the EU’s Artificial Intelligence law matters for Mauritius

By Yashoda Fezah, General Manager, CASS

The European Union has unveiled the world’s first comprehensive Artificial Intelligence law. After reshaping global data privacy with GDPR, Brussels has now set its sights on artificial intelligence with the EU Artificial Intelligence (AI) Act. It came into force on 1 August 2024 and introduces sweeping regulation that will fundamentally reshape how artificial intelligence operates worldwide, not just within European borders, but also in Mauritius. 

Risk classifications: Know where you stand

This groundbreaking legislation establishes a clever risk-based system for classifying AI technologies within or impacting the EU market. AI systems are analysed based on how they’re used and the potential risks they pose to everyday users. The higher the risk, the stricter the compliance requirements.

  • Unacceptable Risk: Simply put, these AI applications are banned outright. Building a social scoring system to analyse individuals' behaviours, actions, and characteristics based on their digital footprints? Planning to deploy real-time facial recognition in public spaces for mass surveillance? The EU's response is a firm "absolutely not."
  • High Risk: The EU’s framework identifies two categories of high-risk AI demanding strict oversight: AI embedded in products already regulated under safety legislation (from children’s toys to medical devices), and AI systems with significant impact on critical domains like infrastructure, education, essential services, law enforcement, and legal interpretation. These require rigorous pre-market and conformity assessments, transparency, and human oversight.
  • Limited Risk: Transparency takes centre stage here. If you're developing chatbots or generating content with AI, you'll need to clearly tell users they're interacting with or viewing artificial content, ensuring people aren't unwittingly conversing with machines while believing they're chatting with humans (informed consent).
  • Minimal Risk: Basic AI applications like spam filters or recommendation algorithms fall under this category. These carry no additional regulatory obligations.

Seems straightforward? Unfortunately, real-world applications rarely fit neatly into these boxes. The line between "high" and "limited" risk remains blurry in many sectors, creating classification challenges that will keep legal and compliance departments busy for years to come.
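As a thought experiment, the four tiers above can be sketched as a simple lookup. This is a hypothetical illustration only, not a legal tool: the example use cases and the fallback to manual review are assumptions drawn from the summary above, not from the Act's annexes.

```python
# Hypothetical sketch: mapping example AI use cases to the EU AI Act's
# four risk tiers as summarised above. Illustrative only; real
# classification requires legal analysis of the Act itself.

RISK_TIERS = {
    "unacceptable": {"social scoring", "real-time public facial recognition"},
    "high": {"credit scoring", "medical device AI", "exam proctoring"},
    "limited": {"chatbot", "AI-generated content"},
    "minimal": {"spam filter", "recommendation engine"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    # Real-world systems rarely fit neatly: default to manual review.
    return "needs legal review"

print(classify("chatbot"))          # limited
print(classify("social scoring"))   # unacceptable
print(classify("minutes drafting")) # needs legal review
```

The deliberate "needs legal review" default mirrors the article's point: the boundaries between tiers are blurry, and edge cases belong with compliance professionals, not a lookup table.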

Extraterritorial impact: What it means for Mauritius

The EU AI Act is extraterritorial: AI developers, and organisations using AI to generate output that will be used in the EU, fall within its scope. Mauritius-based companies serving EU customers should prioritise compliance with the Act. For example, corporate service providers using GenAI to draft minutes for EU customers, or tax advisors using GenAI to draft tax opinions, will need to adhere to these regulations.

A local bank using AI agents for credit scoring might have no physical presence in Europe, but if its algorithm assesses creditworthiness for European customers, both GDPR and the AI Act apply. Compliance obligations don't stop at geographical boundaries: a software provider selling AI-powered HR tools to EU clients must meet both AI Act and GDPR transparency standards.

As compliance and regulatory experts, we can expect the “Brussels effect” to come into play, where EU regulations influence global standards. Even organisations without direct EU dealings but with presence in jurisdictions that have significant EU relationships may soon face compliance requirements.

Mauritius stands at a regulatory crossroads, and local AI legislation can be expected in the future. Companies already working on compliance will have a head start when Mauritian legislation arrives, giving them a clear advantage over competitors.

Global Reach: Why every business should pay attention

If GDPR taught us anything, it’s that European regulations rarely stay confined to European borders. The AI Act’s reach extends to any business in the world whose AI systems impact EU citizens or markets.

The stakes couldn't be higher. Non-compliance penalties can reach a staggering €35 million or 7% of global annual turnover, well above GDPR's cap of €20 million or 4%. Beyond the financial risks, there's the very real prospect of being locked out of the European market entirely. For global enterprises, this represents more than a compliance challenge; it constitutes an existential threat to business continuity.

Your AI Compliance roadmap – A step-by-step guide

Step 1: Conduct an AI Risk & Data Privacy Assessment

Start by cataloguing all your AI systems, including experimental algorithms and third-party tools. Evaluate each: Do they affect individuals? How transparent are they? Do they process GDPR-regulated data? Your existing GDPR compliance work offers a valuable foundation that can fast-track your AI Act preparations.
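One way to start such an inventory is a structured record per system, with the assessment questions above captured as fields. This is a minimal sketch under stated assumptions: the field names and the review rule are illustrative, not prescribed by the Act or GDPR.

```python
# Minimal sketch of an AI-system inventory entry for a risk and
# data-privacy assessment. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    vendor: str                    # internal build or third-party tool
    affects_individuals: bool      # does its output impact real people?
    processes_personal_data: bool  # GDPR-regulated data involved?
    transparency_notes: str        # how decisions can be explained

def needs_priority_review(rec: AISystemRecord) -> bool:
    """Flag systems touching people or personal data for early review."""
    return rec.affects_individuals or rec.processes_personal_data

rec = AISystemRecord(
    name="CV screening assistant",
    vendor="third-party",
    affects_individuals=True,
    processes_personal_data=True,
    transparency_notes="vendor model; explanation report available",
)
print(needs_priority_review(rec))  # True
```

Even a spreadsheet with these columns serves the same purpose; the point is that every system, experimental or not, gets a row before the risk analysis starts.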

Step 2: Implement required safeguards

For high-risk systems, prepare for substantial investment in compliance infrastructure. You will need:

  • Bias testing frameworks reflecting real-world usage – Algorithms must be trained on truly representative datasets. The fairness principle that underpins both GDPR and the AI Act requires rigorous testing across diverse populations.
  • Human oversight – Simply having a person rubber-stamping AI decisions won’t suffice. Human operators must understand the system’s limitations and possess genuine authority to overrule automated outcomes. 
  • Robust documentation and auditability – Beyond technical specifications, regulators want to see the journey of risk identification, assessment and mitigation. Your documentation should demonstrate thoughtful engagement with potential harms, not just checkbox compliance.
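As an illustration of the first safeguard, a bias test might compare outcome rates across demographic groups. The sketch below uses the common "four-fifths" heuristic from fairness practice, which is an assumption on our part; the AI Act does not prescribe a specific statistical test, and the figures are made up.

```python
# Hypothetical bias check: compare positive-outcome rates across groups.
# The 80% (four-fifths) threshold is a common fairness heuristic,
# assumed here; the AI Act does not mandate a specific test.

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group outcome rate."""
    return min(rates.values()) / max(rates.values())

approval_rates = {"group_a": 0.60, "group_b": 0.42}  # made-up figures
ratio = disparate_impact_ratio(approval_rates)
print(round(ratio, 2))  # 0.7
flagged = ratio < 0.8   # below the four-fifths heuristic: investigate
print(flagged)          # True
```

A check like this is only a starting point: the Act's fairness expectations also cover the representativeness of training data and documented human follow-up when a disparity is found.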

Step 3: Prepare for conformity assessments

Many high-risk systems will require DPIAs, external conformity assessments and third-party audits, much as GDPR demands; self-assessment alone won't be enough.

Our advice? Don’t wait for external audits to discover compliance gaps. Establish internal pre-assessment protocols that mimic regulatory reviews. A critical internal evaluation is far less painful than a regulatory finding of non-compliance.

Turning compliance into competitive advantage

Here’s a perspective shift worth considering: What if AI compliance isn’t merely a burden, but an opportunity hiding in regulatory clothing?

Forward-thinking businesses are already recognising that robust AI governance creates tangible market advantages. In a world increasingly wary of “black box” algorithms, demonstrable commitment to transparent, ethical AI becomes a powerful differentiator.

Companies that integrate AI Act and GDPR compliance into unified governance frameworks aren't just avoiding penalties – they're building trust capital with customers, regulators and investors alike. The EU AI Act is setting a global standard, just as GDPR reshaped data privacy worldwide.

The EU AI Act isn’t just changing compliance requirements – it’s redefining what responsible AI development means in practice. Those who embrace this shift proactively won’t just survive the regulatory transition; they’ll thrive in the new landscape of accountable, human-centred artificial intelligence.
