AI’s rapid emergence has left many authorities scrambling to develop regulatory frameworks that can account for its many potential uses and the data it relies on to function. As industries and governments grapple with both the opportunities and the challenges AI presents, the need for a robust regulatory system continues to grow. The European Union has responded with the AI Act, which sets strict guidelines on the ethical use of AI. The law aims to reduce AI risks while advancing innovation and protecting fundamental rights, and businesses must understand these regulations as they adapt to the changing landscape.
In this article, we take a closer look at the key regulations, explain their implications, and offer actionable steps businesses can take to remain compliant without stifling AI innovation.
What Is the EU AI Act?
The EU has taken a firm stance with its regulatory framework. The EU AI Act entered into force in 2024, and most of its provisions will apply from 2026. This gap gives governments, businesses, and other entities time to prepare for implementation. It’s also possible that the Act will evolve before its full implementation, as some regulations are not yet finalized.
The new law complements the GDPR, which already exerts significant pressure on businesses processing personal data. The Act primarily targets providers (developers) of high-risk AI systems, including those looking to market or deploy such systems in the EU. The obligation extends to third-country providers if their system outputs are used in the EU. Users (deployers) of these systems are also accountable, but they face fewer responsibilities than providers.
This framework reflects the growing realization that widespread adoption of AI must be accompanied by robust regulatory measures to mitigate potential risks without stifling technological progress. In fact, the global AI market is expected to reach a value of $184 billion in 2024, with an annual projected growth rate of close to 30%. The AI Act seeks to guide this expansion responsibly, ensuring both innovation and public safety are prioritized.
The AI Act categorizes systems into three tiers: unacceptable risk, high risk, and limited or minimal risk. Let’s take a look at each.
Unacceptable Risk
AI systems that pose a threat to people’s safety or fundamental rights will be banned. These prohibited systems include technologies used for social scoring, manipulative AI designed to distort behavior, indiscriminate government surveillance, and systems that exploit vulnerabilities based on age, disability, or socio-economic status.
High Risk
All AI applications that can significantly affect the rights or safety of individuals fall into this category. Providers of products that utilize high-risk AI systems will be responsible for establishing a suitable risk management system, complying with data governance requirements, providing compliance documentation, enabling human oversight, and achieving an adequate level of accuracy and cybersecurity.
The AI Act lists a number of high-risk uses, including biometric identification, AI for critical infrastructure services, educational applications, employment tools for recruitment and performance evaluation, public service eligibility assessments, and law enforcement systems for evaluating the risk of criminal behavior.
Limited or Minimal Risk
General-purpose AI systems with minimal potential for harm, such as chatbots, will be subject to fewer regulatory demands, though some basic compliance obligations may still apply. The Act sets specific requirements for general-purpose AI models, mandating that providers create technical documentation and disclose information about the content used for training. Models that pose systemic risks must also undergo model evaluations, track serious incidents, and ensure robust cybersecurity measures are in place.
GDPR and AI: data protection challenges
Since it took effect in 2018, the General Data Protection Regulation (GDPR) has become the benchmark for data protection within the EU and beyond. The regulation stipulates how businesses collect, process, and store personal data, and adherence is crucial for developers of AI systems. Noncompliance can trigger substantial financial penalties and reputational damage.
Central to the GDPR is the principle of data minimization: a company should collect no more data than is necessary for clearly defined purposes. This is a challenging proposition for AI, since large datasets are generally crucial for effective machine learning. Organizations must make a deliberate effort to collect only the personal data they genuinely need and place greater emphasis on anonymization and pseudonymization to enhance user privacy and reduce legal exposure.
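As a simple illustration of what pseudonymization and minimization can look like in practice, the sketch below replaces a direct identifier with a keyed hash and drops fields a model doesn’t need before any training data is assembled. It’s a minimal, hypothetical example: the field names, the key handling, and the notion of “required fields” are assumptions for illustration, not requirements taken from the GDPR or any specific product.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would be stored separately from the
# dataset (e.g., in a key management service) so tokens can't be reversed casually.
PSEUDONYMIZATION_KEY = b"replace-with-a-securely-managed-secret"

# Only the fields the model actually needs are kept (data minimization).
REQUIRED_FIELDS = {"age_band", "country", "purchase_category"}

def pseudonymize_record(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash and keep only required fields."""
    token = hmac.new(
        PSEUDONYMIZATION_KEY,
        record["user_id"].encode("utf-8"),
        hashlib.sha256,
    ).hexdigest()
    minimized = {field: record[field] for field in REQUIRED_FIELDS if field in record}
    minimized["user_token"] = token  # re-identification requires the separately held key
    return minimized

raw = {
    "user_id": "alice@example.com",
    "age_band": "30-39",
    "country": "DE",
    "purchase_category": "books",
    "home_address": "not needed for training",  # dropped by minimization
}
print(pseudonymize_record(raw))
```

The design choice here is that the identifier is replaced rather than deleted, so records can still be linked consistently across a dataset without exposing who they belong to.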
Another important aspect of the GDPR is the “right to be forgotten.” It guarantees that individuals can request the erasure of their personal data, including any that might have been used to train AI models. This presents a challenge for AI developers: data must be deleted not only from active databases but also from backup systems. As the volume of deletion requests increases, this requirement becomes even more complex, particularly for legacy systems that lack the capability for easy data removal.
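To make the scope of such a request concrete, the hedged sketch below models an erasure request being propagated to every store that might hold the data, including backups and training snapshots, and records the outcome per store. The store names and interfaces are purely hypothetical and stand in for whatever systems an organization actually runs.

```python
from dataclasses import dataclass, field

@dataclass
class DataStore:
    """Hypothetical data store (primary database, backup archive, training snapshot)."""
    name: str
    records: dict = field(default_factory=dict)  # user_id -> personal data

    def erase(self, user_id: str) -> bool:
        """Remove the user's data if present; report whether anything was deleted."""
        return self.records.pop(user_id, None) is not None

def handle_erasure_request(user_id: str, stores: list[DataStore]) -> dict:
    """Propagate a right-to-be-forgotten request across all stores and log the outcome."""
    return {store.name: store.erase(user_id) for store in stores}

stores = [
    DataStore("primary_db", {"u123": {"email": "user@example.com"}}),
    DataStore("backup_2024_06", {"u123": {"email": "user@example.com"}}),
    DataStore("training_snapshot", {}),  # data may already be absent here
]
# The resulting audit log shows where data was found and erased, and where nothing remained.
print(handle_erasure_request("u123", stores))
```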
Organizations should be prepared to handle a variety of requests from individuals regarding access to data, data correction, and objections to data processing. This becomes even more complicated when an AI system relies on distributed data processing. Everything needs to be transparent, and users need clarity on how their data influences AI decisions.
The global impact of GDPR
The GDPR imposes strict conditions on the processing of personal data and its transfer outside the EU. For global organizations, this is especially relevant: they must implement adequate safeguards for such transfers. The 2020 Schrems II ruling complicated data transfers to the United States, leading to heightened scrutiny and a reevaluation of transfer strategies.
While the GDPR is today most significant for AI companies operating in the EU, its tenets are fast becoming the norm worldwide. Organizations need to be aware not only of the regulations in place at any given time but also of future changes that may impact their operations. Integrating privacy and security into the design of AI systems builds trust and minimizes the risk of future fines for non-compliance. The penalties are significant: fines can reach €10 million or 2% of global annual turnover, and up to €20 million or 4% for the most serious violations.
National differences in AI regulations
While the EU AI Act aims to unify AI regulations across Europe, individual member states can introduce their own guidelines, and Germany is one of the most proactive. Germany’s Data Protection Conference (DSK) issued specific guidance focused on Large Language Models (LLMs) and other AI systems. These rules are stricter than the EU-wide framework, emphasizing compliance with GDPR, particularly around data privacy and transparency.
The German guidance demands that companies using AI to process personal data, especially in sensitive fields such as health and HR, comply with legal requirements and offer transparency. Users must have the right to trace how their data is used and to refuse the use of their personal data for AI training. Automated decisions that significantly affect individuals must involve human oversight to avoid violating GDPR Article 22. Businesses also have to conduct Data Protection Impact Assessments (DPIAs) and involve Data Protection Officers (DPOs) to help ensure AI systems are accurate, accountable, and free from bias.
For companies operating in multiple EU countries, these national variations mean compliance requires a tailored approach. Germany’s focus on privacy and oversight highlights the need for companies to stay vigilant and consult legal experts to navigate both EU-wide and country-specific AI regulations.
Turning compliance challenges into opportunity with Gcore
Although compliance with the AI Act and the GDPR may appear overwhelming at first, it also offers businesses something more valuable: a chance to lead the way on transparency, fairness, and ethics in AI practices and to position themselves as leaders in responsible AI innovation. Compliance with the EU’s stringent regulations can become a competitive differentiator, signaling to consumers and partners that the business prioritizes ethical and safe AI practices.
Businesses can simplify the compliance process by partnering with service providers that offer tailored solutions for AI data management. For example, Gcore offers a suite of sovereign cloud solutions designed to help businesses seamlessly navigate the complex EU regulatory environment, including for AI. By leveraging localized data centers provided by Gcore, businesses can keep their data within the EU, adhering to the GDPR and the forthcoming EU AI Act. For globally operating companies, Gcore’s presence in over 95 countries makes compliance simple. We’d love to tell you more.