By Yashoda Fezah, General Manager, Compliance Administration and Support Services Ltd (CASS)
As Artificial Intelligence (AI) continues to advance, data privacy risks have come to the fore. Because AI systems rely on large amounts of personal data for training, the collection, processing and storage of such data have become key concerns.
Privacy risks of AI
In particular, organisations must be cognisant of the following risks, which raise key questions around the right to privacy and the potential for abuse of AI technologies:
- Data breaches and unauthorised access: With so much data being collected and processed, there is a risk that it could fall into the wrong hands, whether through hacking or other security breaches.
- Surveillance and monitoring: As surveillance systems become more widespread, the use of biometric data, be it fingerprints, facial recognition or iris scans, is becoming common. This raises serious issues because, unlike a password, biometric data cannot be changed once compromised, so any breach creates a persistent privacy risk.
- Lack of transparency and consent: AI systems may collect data covertly, without notification or consent. Such practices often go unnoticed by users and can lead to serious privacy breaches, raising ethical concerns about transparency and consent.
The business case for ethical AI
There is a clear business case for ethical AI: failing to secure data and comply with privacy laws can lead to financial losses and lasting damage to your organisation's reputation.
The majority of the penalties and fines issued for AI thus far have been in the European Union (EU), as authorities have cracked down on the processing of data by AI systems under the General Data Protection Regulation (GDPR). Indeed, the processing of personal information brings many AI systems directly within the scope of the GDPR, which has shaped data regulations worldwide. With the EU AI Act also introducing heavy penalties (up to 7% of annual worldwide turnover), it is crucial that organisations take note of how to remain compliant. In the past two years alone, OpenAI has faced complaints and legal actions under the GDPR in several EU member states, including Austria, France, Germany, Italy, Spain and Poland. These actions have been triggered by concerns including its legal basis for processing personal data and the potential of its models to generate inaccurate content about specific individuals, resulting in privacy or reputational harm.
In addition, a recent survey demonstrated the importance of values such as ethical AI practices: 52% of workers surveyed by Blue Beyond Consulting and Future Workplace said they would quit their job if company values were inconsistent with their own. Thus, while most companies start AI ethics programmes under external pressure, once these programmes become part of the organisational culture, employees themselves supply the motivation to sustain them.
Regulatory response to AI
During the sixth meeting of the Trade and Technology Council (TTC) in April 2024, the EU and the United States (US) emphasised their joint “commitment to a risk-based approach to artificial intelligence” that prioritises transparency and safety. These dialogues may demonstrate shared values across the Atlantic, but each jurisdiction is still at a different point in the regulatory process. While the EU has already enacted the GDPR and AI Act, the US currently relies on voluntary guiding principles like the NIST AI Risk Management Framework and Blueprint for an AI Bill of Rights.
The EU takes a pioneering stance:
The GDPR uses a risk-based approach to regulate the use of personal data in general, rather than specifically within AI systems, but it includes principles that must be complied with by anyone developing or using AI systems that process personal data.
The GDPR backs this up with heavy penalties, up to €20 mn or 4% of worldwide annual turnover, whichever is higher, to ensure businesses take data privacy seriously. Since the law took effect in 2018, over 1,100 fines have been issued, and the cumulative total continues to rise. In April 2023, the European Data Protection Board established a task force to harmonise potential enforcement actions against ChatGPT under the GDPR.
Going beyond the GDPR, the EU has just passed a comprehensive AI law, the EU AI Act, which imposes significant compliance obligations and hefty fines. One of its unique features, not seen in US legislation, is a complete ban on certain "prohibited AI practices" that materially distort people's behaviour or raise serious surveillance concerns in democratic societies. In addition to setting out prohibited practices, the EU AI Act designates a list of high-risk AI practices, including, but not limited to, the use of AI in employment decisions, credit scoring, insurance and access to essential services.
Like the GDPR, the EU AI Act imposes significant fines: up to €35 mn or 7% of total worldwide annual turnover, whichever is higher, for engaging in prohibited AI practices, and up to €15 mn or 3% of total worldwide annual turnover, whichever is higher, for other violations.
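To make the "whichever is higher" mechanics concrete, the short sketch below computes a company's maximum exposure under both regimes. It is purely illustrative: the turnover figure is invented, and actual fines depend on the nature, gravity and duration of the infringement.

```python
# Illustrative sketch of maximum penalty exposure under the GDPR and the EU AI Act.
# The turnover figure is a hypothetical example; real fines are set case by case.

def max_penalty(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Both regimes cap fines at the higher of a fixed amount and a
    percentage of total worldwide annual turnover."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

turnover = 2_000_000_000  # hypothetical: EUR 2 bn worldwide annual turnover

gdpr = max_penalty(turnover, 20_000_000, 0.04)           # GDPR: EUR 20 mn or 4%
ai_act_banned = max_penalty(turnover, 35_000_000, 0.07)  # AI Act, prohibited practices
ai_act_other = max_penalty(turnover, 15_000_000, 0.03)   # AI Act, other violations

print(f"GDPR exposure:        EUR {gdpr:>13,.0f}")           # EUR    80,000,000
print(f"AI Act (prohibited):  EUR {ai_act_banned:>13,.0f}")  # EUR   140,000,000
print(f"AI Act (other):       EUR {ai_act_other:>13,.0f}")   # EUR    60,000,000
```

Note that for a smaller firm the fixed cap dominates: at EUR 100 mn turnover, 7% is only EUR 7 mn, so the prohibited-practices ceiling remains EUR 35 mn.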
The US proceeds slowly but surely:
In October 2023, the White House issued an Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, one of the first binding actions for federal agencies specifically tailored to AI. In addition to the Executive Order, the White House released a nonbinding Blueprint for an AI Bill of Rights in 2022 with five principles to govern the development and deployment of AI in both the public and private sectors.
Finally, in July 2024, the US Department of State released a “Risk Management Profile for Artificial Intelligence and Human Rights” as a practical guide for organisations—including governments, the private sector, and civil society—to design, develop, deploy, use, and govern AI in a manner consistent with respect for international human rights.
What should businesses and policymakers focus on?
To mitigate these risks, AI systems should be designed to minimise the collection and processing of personal data and to keep the data they do hold secure and confidential (a minimal sketch of this principle follows the checklist below). This process is assisted by creating an ethical framework for AI systems:
- Identify ethical principles: Establish the core ethical principles that will guide your AI practices, such as fairness, transparency, and accountability.
- Involve stakeholders: Engage a diverse group of stakeholders, including business leaders, AI developers, and customers, so that the framework reflects a wide range of perspectives.
- Set clear guidelines: Develop clear guidelines for the ethical use of AI, covering everything from data privacy to bias mitigation.
- Align framework with goals and standards: The ethical AI framework should align with your business goals and industry standards so that your AI practices are both effective and responsible.
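As a minimal illustration of the data minimisation principle mentioned above, the sketch below drops direct identifiers and pseudonymises a stable ID before records enter a training set. The field names, schema and salt handling are hypothetical assumptions, not a complete compliance solution.

```python
# Minimal sketch of data minimisation and pseudonymisation before AI training.
# Field names and the salt-handling approach are hypothetical; adapt to your
# own schema and confirm your lawful basis for processing with legal counsel.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # never enters the training set
PSEUDONYMISE = {"user_id"}                       # replaced with a salted hash
SALT = b"store-this-secret-in-a-vault"           # assumption: managed separately

def minimise(record: dict) -> dict:
    """Keep only what the model needs: drop direct identifiers and
    pseudonymise stable IDs so records cannot be trivially re-linked."""
    out = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # dropped outright, collected nowhere downstream
        elif key in PSEUDONYMISE:
            out[key] = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

raw = {"name": "A. Person", "email": "a@example.com",
       "user_id": 4711, "purchase_total": 99.50}
print(minimise(raw))  # {'user_id': '<salted hash>', 'purchase_total': 99.5}
```

Bear in mind that pseudonymised data can still qualify as personal data under the GDPR, so this approach reduces risk rather than eliminating regulatory obligations.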
Transparent data practices, strong regulations and a focus on accountability are key to ensuring that technological progress supports, rather than compromises, privacy and security.
Ultimately, as AI continues to evolve, balancing innovation with data privacy will demand collaboration among businesses, policymakers, and individuals.