
AI is rapidly becoming indispensable to business operations across industries. But as more and more companies harness its power, governments are stepping in and imposing regulations on how such powerful technology should be used. In North America, the regulatory environment is moving fast, particularly on AI ethics and data sovereignty. How will businesses navigate this landscape while continuing to embrace innovation?
In this article, we discuss the regulatory landscape in the US and Canada, examining how companies can innovate in AI while remaining compliant with the letter of the law. Stay tuned for future articles looking at other regions.
The US: A fragmented approach to AI regulation
The United States is gradually building a regulatory structure around AI, but it's still fragmented: efforts are taking place at both the federal and state levels, with state governments driving many AI-related laws. This patchwork of rules poses challenges for businesses operating across state lines, as they must navigate varying compliance requirements.
The repeal of the AI Bill of Rights
On January 21, 2025, the incoming Trump administration announced the repeal of the Blueprint for an AI Bill of Rights. Originally introduced by the Biden administration in 2022, this document outlined ethical guidelines for AI usage and set the stage for future regulation. While not legally binding, the blueprint had emphasized principles such as safe and effective systems, protections against algorithmic discrimination, data privacy, transparency, and human oversight.
The repeal reflects a shift in regulatory priorities and has raised questions about the future of AI governance in the US. Critics argue that removing the blueprint leaves a gap in ethical guidance for AI development and deployment, while proponents claim it lacked enforceability and failed to address the fast-evolving AI landscape.
Despite its repeal, the AI Bill of Rights still influences ongoing state-level legislation and industry best practices. Businesses should remain aware of these principles, as they are likely to inform future regulatory efforts.
Key principles: what businesses can retain from the AI Bill of Rights
Although the blueprint is no longer in effect, its foundational ideas continue to resonate, having shaped expectations during a formative period for AI. Businesses can still use these principles to align their AI strategies with emerging ethical standards:
- Safe and effective systems: Businesses should continue to prioritize safety and reliability in AI. Testing systems rigorously, involving diverse stakeholders in development, and conducting independent audits remain essential for mitigating risks. This is particularly critical in sensitive industries like healthcare and finance.
- Algorithmic discrimination protections: Bias in AI systems is a pressing issue. The repeal doesn’t negate existing regulatory scrutiny, such as the Equal Employment Opportunity Commission’s (EEOC) initiative on AI hiring practices. Companies must proactively monitor and address bias to avoid reputational and legal risks; one simple check is sketched after this list.
- Data privacy: With the repeal, states like California will likely take a more prominent role in shaping data privacy standards.
- Transparency: Transparency remains vital for building trust. Even without federal guidance, industries using AI should aim to provide clear explanations of AI decisions, particularly in high-stakes areas like healthcare and financial services.
- Human oversight: The principle of maintaining human alternatives to AI decisions is widely regarded as a best practice. Businesses should continue to implement mechanisms for human review and appeals to maintain consumer confidence and regulatory alignment.
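To make the bias-monitoring point above concrete, here is a minimal sketch of one common check: the gap in selection rates between groups (sometimes called demographic parity). Everything in it is an illustrative assumption; the audit data, group labels, and the 0.1 alert threshold are invented for this example and are not drawn from EEOC guidance.

```python
# Minimal sketch of a demographic-parity check on model decisions.
# All data and the alert threshold below are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        selected[group] += int(picked)
    return {group: selected[group] / totals[group] for group in totals}

def parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of (group, selected) outcomes from a screening model.
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

gap = parity_gap(audit_log)
if gap > 0.1:  # alert threshold chosen purely for illustration
    print(f"Selection-rate gap of {gap:.2f} exceeds threshold; flag for review.")
```

In practice, a check like this would run continuously over real decision logs and feed into the human-review process described above.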
State-level regulations: California leading the charge
While federal guidelines shape AI governance on a large scale, individual states have moved quickly to pass their own legislation, greatly influencing how AI is implemented. Leading the charge is California.
California passed the California Consumer Privacy Act (CCPA) in 2018, with enforcement beginning in 2020. This law greatly amplifies consumer privacy protections while imposing strict data-handling rules on businesses. Fines can reach $7,500 for each intentional violation, making compliance essential for any business operating within, or even just serving, California’s market. These penalties are more than just a slap on the wrist: beyond the fines themselves, non-compliant companies face serious reputational consequences.
The CCPA doesn’t just offer vague promises to protect personal data. It lays down concrete rights for California residents. They can ask companies exactly what personal information they’ve collected, how it’s used, and even request its deletion. That’s a big deal. And if someone doesn’t want their data sold or shared? They have the right to opt out. Businesses, in turn, can’t refuse these requests or discriminate against anyone exercising their rights. This goes beyond surface-level protections: people can request that their data be corrected if it’s wrong and limit how sensitive data like financial information or precise geolocation is used. These obligations aren’t limited to big companies either; if a business collects data from California residents, it’s bound by the CCPA’s rules.
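To show what honoring these rights can look like operationally, here is a minimal sketch of a request handler over a hypothetical in-memory record store. Every name here is invented, and a production system would add identity verification, audit logging, and propagation of deletions to downstream processors.

```python
# Minimal sketch of servicing CCPA-style consumer requests.
# The record store and all field names are hypothetical.
records = {
    "user-123": {"email": "a@example.com",
                 "geolocation": "37.77,-122.42",
                 "opted_out_of_sale": False},
}

def handle_request(user_id, request_type):
    record = records.get(user_id)
    if record is None:
        return "no data held"
    if request_type == "access":      # right to know what was collected
        return dict(record)
    if request_type == "delete":      # right to deletion
        del records[user_id]
        return "deleted"
    if request_type == "opt_out":     # right to opt out of sale/sharing
        record["opted_out_of_sale"] = True
        return "opted out"
    raise ValueError(f"unknown request type: {request_type}")

print(handle_request("user-123", "access"))
print(handle_request("user-123", "opt_out"))
print(handle_request("user-123", "delete"))
```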
Beyond California
But California’s not alone. Seventeen states have passed a combined total of 29 bills regulating AI systems, mostly focused on data privacy and accountability. For instance, Virginia and Colorado have rolled out the Virginia Consumer Data Protection Act (VCDPA) and the Colorado Privacy Act (CPA), respectively. These efforts reflect a growing trend of state-level governance filling in the gaps left by slow-moving federal legislation.
States such as Texas and Vermont have even set up advisory councils or task forces to study the impact of AI and propose further regulations. By enacting these laws, states aim to ensure that AI systems not only protect data privacy but also promote fairness and prevent algorithmic discrimination.
These state initiatives, while advancing AI governance, create a complex web of regulations that businesses must keep up with, especially those operating across state lines. Each state’s take on privacy and AI governance varies, making the legal landscape difficult to map. But one thing’s clear: businesses that overlook these rules are setting themselves up for more than just a compliance headache; they’re facing potential lawsuits, fines, and a serious hit to customer trust.
Canada: A more unified approach
Canada has taken a more unified approach to AI regulation than the US, with a focus on creating a national framework. The proposed Artificial Intelligence and Data Act (AIDA) requires that AI systems be safe, transparent, and fair. It also requires companies to use reliable, unbiased data in their AI models to avoid discrimination and other harmful outcomes. Under AIDA, businesses must conduct thorough risk assessments and ensure their AI systems don’t pose a threat to individuals or society.
Alongside AIDA, Canada has also proposed a reform of the Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how businesses handle personal information. When it comes to AI, PIPEDA places strict rules on how data is collected, stored, and used. Under PIPEDA, individuals have the right to know how their personal data is being used, which presents a challenge for companies developing AI models. Businesses need to ensure that their AI systems are transparent, which means being able to explain how the system makes decisions and how personal data is involved in those processes.
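As one illustration of what “explaining how the system makes decisions” can look like in code, here is a minimal sketch for a linear scoring model, where each feature’s contribution is simply its weight times its value. The feature names and weights are invented for this example; more complex models require dedicated explainability tooling.

```python
# Minimal sketch: explaining a linear model's decision by per-feature
# contribution (weight x value). All names and weights are invented.
weights = {"income": 0.8, "debt_ratio": -1.2, "account_age_years": 0.3}

def explain(features):
    """Return the score and each feature's signed contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

applicant = {"income": 1.5, "debt_ratio": 0.9, "account_age_years": 4.0}
score, reasons = explain(applicant)
print(f"score = {score:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

An explanation like this, mapped back to which personal data fields fed each feature, is one way to answer an individual’s question about how their information influenced a decision.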
In June 2022, Canada introduced Bill C-27, which includes three key parts: the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act, and the Artificial Intelligence and Data Act. If passed, the CPPA would replace PIPEDA as the main privacy law for businesses. In September 2023, Minister François-Philippe Champagne announced a voluntary code to guide companies in the responsible development of generative AI systems. This code offers a temporary framework for companies to follow until official regulations are put in place, helping to build public trust in AI technologies.
Gcore: supporting compliance and innovation
Keeping AI innovation in step with compliance is tricky in a continuously shifting regulatory environment. Businesses must stay up to date by monitoring regulatory changes across states, at the federal level, and even across borders. This means not just understanding these laws but embedding them into every process.
In an environment where the rules are changing from day to day, Gcore supports global AI compliance by offering localized data storage and edge AI inference. This means your data is automatically handled in full accordance with rules specific to any region or field, whether it’s healthcare, finance, or any other highly regulated industry. We understand that compliance and innovation are not mutually exclusive, and can empower your company to excel in both. Get in touch to learn how.