Navigating AI regulations in North America: balancing innovation and data sovereignty

AI is rapidly becoming indispensable to business operations across industries. But as more and more companies harness the power of AI, governments are stepping in and imposing regulations on how such a powerful technology should be used. In North America, the regulatory environment is moving fast, particularly on AI ethics and data sovereignty. How can businesses navigate this landscape while continuing to embrace innovation?

In this article, we discuss the regulatory landscape in the US and Canada, examining how companies can innovate in AI while remaining compliant with the letter of the law. Stay tuned for future articles looking at other regions.

The US: A fragmented approach to AI regulation

The United States is gradually building a regulatory structure around AI, but it’s still fragmented: While some efforts are happening at the federal level, many AI-related laws are driven by state governments. The lack of uniformity means businesses have to navigate a patchwork of rules, especially when operating across state lines.

The AI Bill of Rights

In 2022, the Biden administration introduced the Blueprint for an AI Bill of Rights, better known as the AI Bill of Rights. While not legally binding, this document sets out ethical guidelines for AI usage and lays the groundwork for what AI regulation might look like in the future. The bill emphasizes five key principles.

1. Safe and effective systems

First and foremost, AI systems must be safe and reliable. Whether diagnosing medical conditions or making financial predictions, these systems need to be rigorously tested and designed with input from diverse communities and experts to identify potential risks and impacts, making sure they perform as intended and don’t cause harm. AI must not be used in ways that put users or their communities at risk. The design process must guard against inappropriate data usage and minimize any harm that could arise from reusing data. Independent assessments should verify that systems are safe and effective, and the findings should be shared publicly to promote transparency and accountability.

For example, suppose an AI tool in the healthcare industry makes an incorrect diagnosis because it wasn’t trained on diverse datasets. That’s a serious risk for both the business and the patients involved. Businesses using AI in sensitive areas, like healthcare or finance, should prioritize regular audits and continuous monitoring to keep their systems safe and minimize end-user risk.

2. Algorithmic discrimination protections

AI has a known problem with bias, and regulators are taking notice. From hiring tools that unfairly disadvantage minority candidates to financial systems that discriminate against certain demographics, algorithmic discrimination is a real issue. The US is focusing heavily on making sure AI doesn’t perpetuate or create new forms of discrimination.

A study by the Pew Research Center revealed that 62% of respondents believe that AI will have a significant effect on jobholders in the next twenty years, with approximately 71% opposing its use in hiring decisions. At the beginning of 2023, the Equal Employment Opportunity Commission (EEOC) began an initiative to investigate AI-driven hiring practices and provide guidelines to minimize the risk that they could lead to biased outcomes. Businesses using AI need to regularly check their systems for unintended biases and implement transparency measures to avoid potential discrimination.
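To make such a bias check concrete, here is a minimal Python sketch of the kind of audit a company might run on its hiring tool’s outputs: it computes the selection rate for each demographic group and flags any group whose rate falls below the widely used four-fifths (80%) guideline relative to the highest-rate group. The data, group labels, and threshold are hypothetical, not an official EEOC procedure.

```python
from collections import defaultdict

def adverse_impact_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    (the common four-fifths rule) relative to the best-performing group.

    `decisions` is a list of (group, selected) pairs, e.g. ("group_a", True).
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)

    rates = {g: selected[g] / totals[g] for g in totals}
    best_rate = max(rates.values())

    flagged = {
        g: round(rate / best_rate, 2)
        for g, rate in rates.items()
        if best_rate > 0 and rate / best_rate < threshold
    }
    return rates, flagged

# Hypothetical audit data: (demographic group, hiring-tool recommendation)
sample = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60
    + [("group_b", True)] * 25 + [("group_b", False)] * 75
)

rates, flagged = adverse_impact_check(sample)
print(rates)    # selection rate per group
print(flagged)  # groups below 80% of the top selection rate
```

A real audit would, of course, use production decision logs and involve legal review, but even a simple check like this can surface disparities early.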

3. Data privacy

Data privacy is a hot topic, and the AI Bill of Rights emphasizes that users need to be informed about how their data is collected, stored, and used by AI systems. It’s not just a matter of keeping data safe; it’s about giving users control over their own information. This principle also means unchecked continuous surveillance must not be used without consent, particularly at work, in housing, in education, or in any other context where it is likely to impinge on rights, opportunities, or access.

Individual states have supplemented this principle with their own privacy regulations that help ensure citizens’ data privacy rights are protected. For example, the California Consumer Privacy Act (CCPA) has some of the strictest consumer data privacy rules in the US, and companies that fail to comply face serious penalties. Businesses using AI to process personal data should invest in strong data governance practices to ensure compliance with state and federal laws.

4. Notice and explanation

AI systems can be hard to comprehend; even the people running them often don’t fully understand how they work. The AI Bill of Rights pushes for transparency, requiring businesses to provide notice and explanations that communicate both when an AI system is in use and how it makes decisions.

This is particularly vital in areas like health or finance, where the stakes are high. Companies should make sure their AI tools give clear explanations for their decisions, both to meet regulatory requirements and to maintain consumer trust.
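As a rough illustration of what a human-readable explanation might look like, here is a minimal Python sketch that turns a hypothetical linear scoring model into plain-language “reason codes” for each decision. The model, feature names, weights, and threshold are invented for illustration; real systems would need explanations appropriate to their actual models.

```python
# Hypothetical linear credit-scoring model: score = sum(weight * value).
WEIGHTS = {
    "years_of_credit_history": 2.0,
    "recent_missed_payments": -15.0,
    "credit_utilization_pct": -0.5,
}
APPROVAL_THRESHOLD = 0.0

def explain_decision(applicant: dict, top_n: int = 2) -> dict:
    """Return the decision plus the features that pushed it furthest
    in the direction of the final outcome (simple 'reason codes')."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    approved = score >= APPROVAL_THRESHOLD

    # For a denial, surface the most negative contributions;
    # for an approval, the most positive ones.
    reasons = sorted(
        contributions.items(),
        key=lambda item: item[1],
        reverse=approved,
    )[:top_n]

    return {
        "approved": approved,
        "score": round(score, 2),
        "reasons": [f"{name} contributed {value:+.1f}" for name, value in reasons],
    }

print(explain_decision({
    "years_of_credit_history": 3,
    "recent_missed_payments": 2,
    "credit_utilization_pct": 80,
}))
```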

5. Human alternatives and fallbacks

Regardless of how sophisticated AI becomes, it’s important to have human oversight in critical situations. The AI Bill of Rights encourages giving people the option to appeal or override AI decisions, especially when those decisions could have a major impact on their lives.

For example, in financial services, if an AI system denies a loan application, the applicant should have the right to ask for a human to review the decision. This kind of human oversight not only builds trust but also helps businesses stay compliant with regulations.
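One simple way to wire in such a fallback is to treat unfavorable or low-confidence AI outcomes as provisional and route them to a human review queue rather than returning them as final. The sketch below is a hypothetical illustration in Python; the threshold and names are assumptions, not a prescribed pattern.

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class LoanDecision:
    applicant_id: str
    approved: bool
    confidence: float   # model confidence in [0, 1]
    final: bool         # False while the decision awaits human review

human_review_queue: Queue = Queue()
CONFIDENCE_FLOOR = 0.9  # illustrative threshold

def decide_with_fallback(applicant_id: str, approved: bool, confidence: float) -> LoanDecision:
    """Return the AI decision, but escalate denials and low-confidence
    outcomes to a human reviewer instead of treating them as final."""
    needs_review = (not approved) or (confidence < CONFIDENCE_FLOOR)
    decision = LoanDecision(applicant_id, approved, confidence, final=not needs_review)
    if needs_review:
        human_review_queue.put(decision)
    return decision

# Illustrative usage: the denial is queued for a human to confirm or override.
d = decide_with_fallback("app-42", approved=False, confidence=0.97)
print(d.final, human_review_queue.qsize())  # False 1
```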

Updates to the AI Bill of Rights

In October 2023, the White House issued an executive order that updated the AI Bill of Rights and reinforced several of its major points. The order tightens safety and security protocols for AI, requiring companies to report their safety test results to the federal government. It also tasks the National Institute of Standards and Technology with creating strict testing standards to verify that AI systems are reliable before they are released.

Privacy remains a priority, with the order introducing measures to safeguard personal data and reduce the risk of AI misusing sensitive information. The order also urges Congress to pass bipartisan data privacy legislation and supports the development of advanced privacy-preserving technologies like cryptographic tools. In addition, it focuses on promoting fairness, aiming to stop AI from contributing to discrimination, especially in areas like housing, healthcare, and criminal justice.

Beyond consumer and worker protections, the executive order seeks to boost innovation by providing smaller developers with resources and increasing funding for AI research across the country.

State-level regulations: California leading the charge

While federal guidelines shape AI governance on a large scale, individual states have moved quickly with their own legislation, strongly influencing how AI is implemented. Leading the charge is California.

California passed the California Consumer Privacy Act (CCPA) in 2018, and it took effect in 2020. The law greatly amplifies consumer privacy protections while imposing strict data handling rules on businesses. Fines for failing to follow these rules can reach $7,500 for each intentional violation, making compliance essential for any business operating within, or even just serving, California’s market. These penalties are more than just a slap on the wrist. In addition to fines, companies can face serious reputational and financial consequences for non-compliance.

The CCPA doesn’t just offer vague promises to protect personal data. It lays down concrete rights for California residents. They can ask companies exactly what personal information they’ve collected, how it’s used, and even request its deletion. That’s a big deal. And if someone doesn’t want their data sold or shared? They have the right to opt out. Businesses, in turn, can’t refuse these requests or discriminate against anyone exercising their rights. This goes beyond surface-level protections—people can request that their data be corrected if it’s wrong and limit how sensitive data like financial info or precise geolocation is used. These rights aren’t limited to just big companies either; if a business collects data from California residents, it’s bound by the CCPA’s rules.
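To illustrate what honoring these rights can look like in practice, here is a minimal, hypothetical Python sketch of a consumer-request handler supporting access, deletion, correction, and opt-out of sale. The class and field names are illustrative assumptions, not a compliance-ready CCPA implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ConsumerRecord:
    """Illustrative per-consumer record tracking data and privacy preferences."""
    consumer_id: str
    personal_data: Dict[str, str] = field(default_factory=dict)
    do_not_sell: bool = False

class PrivacyRequestHandler:
    """Toy handler for the kinds of requests the CCPA grants consumers:
    access ("know"), deletion, correction, and opt-out of sale."""

    def __init__(self):
        self._records: Dict[str, ConsumerRecord] = {}

    def add_record(self, record: ConsumerRecord) -> None:
        self._records[record.consumer_id] = record

    def access(self, consumer_id: str) -> Dict[str, str]:
        # Right to know: return everything held about the consumer.
        return dict(self._records[consumer_id].personal_data)

    def delete(self, consumer_id: str) -> None:
        # Right to deletion: remove stored personal data.
        self._records[consumer_id].personal_data.clear()

    def correct(self, consumer_id: str, field_name: str, value: str) -> None:
        # Right to correction: fix inaccurate personal information.
        self._records[consumer_id].personal_data[field_name] = value

    def opt_out_of_sale(self, consumer_id: str) -> None:
        # Right to opt out of the sale or sharing of personal information.
        self._records[consumer_id].do_not_sell = True

# Illustrative usage
handler = PrivacyRequestHandler()
handler.add_record(ConsumerRecord("c-123", {"email": "user@example.com"}))
print(handler.access("c-123"))
handler.opt_out_of_sale("c-123")
handler.delete("c-123")
print(handler.access("c-123"))  # {}
```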

Beyond California

But California’s not alone. Seventeen states have passed a combined total of 29 bills regulating AI systems, mostly focused on data privacy and accountability. For instance, Virginia and Colorado have rolled out the Virginia Consumer Data Protection Act (VCDPA) and the Colorado Privacy Act (CPA), respectively. These efforts reflect a growing trend of state-level governance filling in the gaps left by slow-moving federal legislation.

States such as Texas and Vermont have even set up advisory councils or task forces to study the impact of AI and propose further regulations. By enacting these laws, states aim to ensure that AI systems not only protect data privacy but also promote fairness and prevent algorithmic discrimination.

These state initiatives, while beneficial to AI regulation, create a complex web of regulations that businesses must keep up with, especially those operating across state lines. Each state’s take on privacy and AI governance varies, making the legal landscape difficult to map. But one thing’s clear: businesses that overlook these rules are setting themselves up for more than just a compliance headache; they’re facing potential lawsuits, fines, and a serious hit to customer trust.

The National AI Initiative Act

While states focus on data privacy, the federal government is investing in AI innovation through the National AI Initiative Act of 2020. This law aims to keep the US competitive in AI development while also addressing ethical issues like data privacy and bias.

Although the focus here is on fostering innovation, the act emphasizes that companies must stay compliant with existing rules, particularly in heavily regulated sectors like healthcare, while working on new AI applications. For example, HIPAA (the Health Insurance Portability and Accountability Act) sets strict rules for how medical data can be used, which directly impacts AI systems operating in the healthcare space.

Canada: A more unified approach

Canada has taken a more unified approach to AI regulation compared to the US, with a focus on creating a national framework. The proposed Artificial Intelligence and Data Act (AIDA) requires that AI systems are safe, transparent, and fair. It also requires companies to use reliable, unbiased data in their AI models to avoid discrimination and other harmful outcomes. Under AIDA, businesses must conduct thorough risk assessments and ensure their AI systems don’t pose a threat to individuals or society.

Alongside AIDA, Canada has also proposed a reform of the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how businesses handle personal information. When it comes to AI, PIPEDA places strict rules on how data is collected, stored, and used. Under PIPEDA, individuals have the right to know how their personal data is being used, which presents a challenge for companies developing AI models. Businesses need to ensure their AI systems are transparent, which means being able to explain how the system makes decisions and how personal data is involved in those processes.

In June 2022, Canada introduced Bill C-27, which includes three key parts: the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act. If passed, the CPPA would replace PIPEDA as the main privacy law for businesses. In September 2023, Minister François-Philippe Champagne announced a voluntary code to guide companies in the responsible development of generative AI systems. This code offers a temporary framework for companies to follow until official regulations are put in place, helping to build public trust in AI technologies.

Gcore: supporting compliance and innovation

Keeping AI innovation in step with compliance is tricky in a continuously shifting regulatory environment. Businesses must stay up to date by monitoring regulatory changes across states, at the federal level, and even across borders. This means not just understanding these laws but embedding them into every process.

In an environment where the rules change from day to day, Gcore supports global AI compliance by offering localized data storage and edge AI inference. This means your data is handled in accordance with the rules specific to your region and field, whether that’s healthcare, finance, or any other highly regulated industry. We understand that compliance and innovation are not mutually exclusive, and we can empower your company to excel in both. Get in touch to learn how.

Discover Gcore Inference at the Edge

