AI Regulations are Changing; Sovereign Cloud Helps Businesses Comply

AI has taken the world by storm, moving so fast in recent years that regulators simply couldn’t keep up. Questions about how training data was being sourced and whether inference data was being stored raised major ethical concerns, with critics warning that AI was becoming the technological Wild West.

Fortunately, regulators weren’t asleep on the job, and in 2024, we’ve seen a flurry of new and updated policies that seek to ensure companies use AI ethically. These regulations, which vary somewhat from one region to another, set new standards for companies that train AI models, or run inference, on sensitive information like personally identifiable and health data. This can be a real headache for companies that started using AI before the regulations came into force, or for those looking to use AI for the first time who feel roadblocked by the complexities of differing regional requirements.

A sovereign cloud approach based on dedicated private data centers or edge computing can address these compliance requirements by ensuring training data and models remain geographically bound. Let’s look at some current regional regulations that affect AI and explore how sovereign cloud approaches can help your business comply, no matter where your training and inference take place.

What’s the Current AI Regulation Landscape?

The first iterations of today’s AI regulatory policies only covered traditional databases that stored data as-is. LLMs weren’t affected by these regulations because they technically don’t store their training data, but rather generate a model that’s essentially a very lossy compression of this data. This meant regulators had to update their policies to close the loophole LLMs created. It’s these regulations that dominate the AI landscape today and affect every business that uses AI.

And that goes for training and inference—if you have even one user in a specific region, you need to be aware of that region’s regulations. Likewise, if you train your ML model in one region but deploy it elsewhere, it’s important to take note of the regulations in both the training and deployment locales.

Let’s take a brief look at how the EU, US, China, and India have addressed the LLM loophole and what legislation your business needs to comply with to run AI in these regions.

European Union

In 2018, the EU introduced the General Data Protection Regulation (GDPR), which restricts the transfer of EU residents’ personal data outside the EU unless adequate safeguards are in place; in practice, many companies comply by storing that data inside an EU member country. With the AI Act of 2024, the EU established new regulations for AI and extended existing ones, like the GDPR, to also cover AI models whose training data came from EU residents.

United States

The US passed the CLOUD Act in 2018, which allows federal law enforcement to compel US-based service providers to disclose data in their control, regardless of where that data is stored globally. In 2023, the president signed an executive order on AI that, among other measures, directed the development of standards for watermarking and labeling AI-generated content.

China

Like the GDPR, China’s Cybersecurity Law of 2017 requires that important data be stored on servers in mainland China. The draft of China’s latest AI law also includes the right to know which AI process generated specific data and the right to refuse AI-based decisions in contexts where humans had previously made these decisions.

India

In 2023, India enacted the Digital Personal Data Protection (DPDP) Act, which regulates the use of Indian citizens’ data similarly to the GDPR. Notably, the Act applies regardless of whether the data is stored within India; companies must comply no matter where storage occurs. While India doesn’t have comprehensive AI legislation yet, the government requires significant IT enterprises to seek approval before deploying LLMs to the Indian market.

How Does a Sovereign Cloud Approach Help Ensure Compliance?

Sovereign cloud approaches include dedicated private clouds and edge computing. Let’s look at how each can help your business stay compliant with AI regulations.

Private Cloud

A private cloud is a computing environment dedicated to a single organization. Its infrastructure is hosted either on-premises or by a third party, and it offers top levels of control, security, and customization.

The private cloud approach ensures compliance by building a dedicated cloud inside a specific jurisdiction that collects, stores, and processes all data belonging to the country of its deployment.

Figure 1: Private cloud approach
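The jurisdiction-bound processing described above can be sketched in a few lines. This is purely illustrative (the function and field names are hypothetical, not part of any real SDK): the deployment is pinned to one region, and any record from another jurisdiction is refused rather than processed out of place.

```python
# Minimal sketch of a data-residency guard in a region-pinned private cloud.
# All names (DEPLOYMENT_REGION, process_record, "jurisdiction") are illustrative.

DEPLOYMENT_REGION = "EU"  # the jurisdiction this private cloud is built in


def process_record(record: dict) -> str:
    """Store and process a record only if it belongs to the deployment region."""
    if record.get("jurisdiction") != DEPLOYMENT_REGION:
        # Refusing out-of-region data keeps collection, storage, and
        # processing inside the jurisdiction the cloud was built for.
        raise ValueError(
            f"Record from {record.get('jurisdiction')!r} cannot be processed "
            f"in the {DEPLOYMENT_REGION} private cloud"
        )
    return f"processed in {DEPLOYMENT_REGION}"
```

In a real deployment, this kind of guard would sit alongside infrastructure-level controls (region-locked storage buckets, network policies), not replace them.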

The downside of this approach is the huge upfront costs, as the dedicated data centers must be built and maintained inside a foreign country.

Edge Computing

Edge computing is a distributed computing model that brings processing power and data storage closer to the devices or locations where it’s needed, reducing latency and improving performance for real-time applications. It’s almost always provided by a specialist third party.

This strategy supports compliance by dynamically routing requests from a given jurisdiction to the nearest edge location within it. An edge location inside the EU processes a request from the EU, so collection, storage, and processing happen according to the GDPR. Since edge computing is outsourced, your business doesn’t need to worry about processing happening in the right location: your provider takes care of that, so your regulatory compliance occurs behind the scenes.

Figure 2: Edge computing approach
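The jurisdiction-aware routing above can be sketched as a simple lookup from the client’s region to an in-region edge endpoint. The endpoint URLs and region codes below are hypothetical; a real edge provider resolves this via DNS or anycast rather than application code.

```python
# Illustrative sketch of jurisdiction-aware request routing at the edge.
# Endpoint URLs and region codes are made up for this example.

EDGE_LOCATIONS = {
    "EU": "https://eu.edge.example.com",
    "US": "https://us.edge.example.com",
    "IN": "https://in.edge.example.com",
}


def route_request(client_region: str) -> str:
    """Return an edge endpoint inside the client's own jurisdiction.

    Keeping the request in-region means collection, storage, and
    processing all happen under that region's rules (e.g., GDPR for EU).
    """
    try:
        return EDGE_LOCATIONS[client_region]
    except KeyError:
        raise ValueError(f"No compliant edge location for region {client_region!r}")
```

The key design point is that routing is decided per request, so a single application can serve users in multiple jurisdictions while each user’s data stays local.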

This approach doesn’t have the huge upfront costs of a dedicated private cloud, but the decentralized nature of the edge can require a significant reengineering of existing software.

As AI regulations continue to evolve globally, more countries will likely follow the US and EU’s lead to protect their residents and help companies use AI responsibly.

In the EU, we will likely see stricter regulations to ensure foreign companies comply with EU rules. However, there is still no unified approach to AI governance across EU member states, meaning AI-related laws may vary within the region until a comprehensive framework is established and ratified.

Meanwhile, the US is taking a more flexible approach, prioritizing the growth of its AI economy. The government appears focused on preserving access to AI technology and the data it generates.

Another emerging trend is the push for better standardization of AI tools and processes. For example, the new ISO/IEC 42001:2023 standard could serve as a benchmark for future regulations, requiring businesses to adhere to these standards to ensure compliance.

New Solutions for New Regulatory Requirements

Most major markets now regulate how and where AI data is stored and processed. These new circumstances require simple and affordable solutions for businesses. Edge computing seems to be a promising option, offering a new approach to sovereign cloud that allows every organization to store and process data in its country of origin without making infrastructure investments.

Gcore Inference at the Edge simplifies AI regulatory compliance by processing data on a network of 180+ strategically distributed points of presence around the world. No matter where you’re training your model or where your customers are located, we can keep you compliant with ever-changing regional regulations so you can focus on your core business. Get in touch for a complimentary consultation.

Simplify your AI regulatory compliance with Gcore
