
The AI Regulation Tug of War: Canada Struggles to Find Balance

March 26, 2025

Canada is caught between two competing approaches to regulating Artificial Intelligence (AI). On one side is the United States, advocating for innovation with minimal constraints; on the other is the European Union, championing risk-based regulation. These competing visions make charting a path forward a challenge for Canadian policymakers.

Canada's past efforts to establish a regulatory framework for AI have faced substantial hurdles. After years of slow and often tortuous progress through the federal legislative system, the Artificial Intelligence and Data Act (AIDA), introduced in 2022, is now on ice for the foreseeable future. The bill ultimately stalled in parliamentary committee, unable to secure the consensus needed to move forward. This stalemate, further complicated by the prorogation of Parliament in early 2025, has effectively frozen AIDA in its tracks. As of writing, it is unclear what vision the current federal leaders have for AI regulation.

When Canada introduced its national AI strategy in 2017—often cited as the world's first—the country seemed poised to take a leadership role in AI innovation and governance. Today, that early advantage has largely evaporated. With AIDA stalled indefinitely and two competing regulatory models emerging from the U.S. and EU, Canada stands in the middle of a tug of war that will shape the future of the innovation economy, as well as how CPAs should think about integrating AI into their work.

The Paris AI Action Summit: Diverging Global Approaches to AI

The February 2025 Paris AI Action Summit marked a clear divergence in approach to AI governance. While 60 countries, including Canada, signed the summit's "Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet", the United States and United Kingdom notably did not.

This break was a stark departure from previous summits. The inaugural UK AI Safety Summit in November 2023 had produced the Bletchley Declaration, where all 28 countries agreed on the need for international cooperation on AI safety. The Seoul AI Summit in May 2024 continued this safety-first focus with further consensus on commitments to guardrails for AI development.

But by the time the Paris summit began, the world had changed. When JD Vance, the Vice President of the United States, took the podium, the sea change was clear: "We believe that excessive regulation of the AI sector could kill a transformative industry just as it's taking off... The AI future is not going to be won by hand-wringing about safety."

In January 2025, the Trump administration issued an Executive Order that replaced the previous administration's directive on AI oversight. Where the previous approach emphasized guardrails and AI safety, the new administration is prioritizing innovation and competitiveness. This approach relies largely on industry self-regulation, focuses on removing barriers to AI development, and emphasizes maintaining a competitive advantage in the global AI race.

Meanwhile, the European Union's Artificial Intelligence Act (EU AIA) came into effect in August 2024, establishing the world's first binding AI-specific legislation. Unlike the U.S. approach, the EU strategy favours stronger regulatory oversight, with robust data and privacy protections and a risk-based approach to AI use. The legislation completely prohibits certain "unacceptable risk" applications (like social scoring systems, predictive policing, or emotion recognition in the workplace) and imposes strict requirements on "high-risk" AI systems, including extensive testing, documentation, and human oversight. Throughout, it emphasizes user protection, transparency, and accountability. The contrast between these two models reflects a broader tension in AI governance: the delicate balance between technological advancement and regulatory control.

What is AI Regulation?

Artificial intelligence regulation encompasses the legal frameworks and policies set by governments to oversee the growth and application of artificial intelligence. Regulations are designed to minimize potential risks, while encouraging the responsible and ethical use of AI technology.

It’s a careful balancing act. Regulation can protect individuals from potential risks—like bias, privacy violations, hallucinations and incorrect information—but too much regulation can stifle innovation.

There are also concerns about timing. The rapid pace of innovation, combined with the length of the regulatory process, makes it difficult for legislators to keep up with emerging applications of AI and their associated risks, potentially leaving regulations out of date.

Canadian leaders in the tech sector have cautioned that legislation may slow the pace of innovation and that vague or overly complex rules could hamstring development. Some also believe that AI in its current form does not pose enough risk to warrant rigorous legislation.

Implementing AI regulation has also proven complicated. The EU AI Act, which passed in March 2024 and entered into force in August 2024, is the first (and currently only) binding legislation regulating AI. Many other countries, including Canada, are in the process of developing AI legislation, with varying levels of success.

Losing Grip on the Rope: AIDA’s Stalled Progress

Canada's own AI regulatory efforts have ultimately "languished and died in a parliamentary committee, unable to secure the confidence and political will needed to proceed through the legislative process."

AIDA was first introduced in June 2022 as part of Bill C-27 and applied to private sector organizations that design, develop, or make available for use AI systems in the course of interprovincial trade and commerce.

AIDA was intended to "protect Canadians, ensure the development of responsible AI in Canada, and to prominently position Canadian firms and values in global AI development." But its critics claimed it would do none of that.

Similar to the EU AIA, AIDA took a "risk-based" approach to regulation, aiming to balance protection with innovation. The Act prohibited certain harmful conduct without banning specific AI use cases outright. Instead, it required AI providers to self-assess whether their systems were "high-impact systems"—a term that was never fully defined—which would then need to comply with safety standards.

The Act also prohibited reckless and malicious uses of AI that would cause serious harm to Canadians, such as systems created with unlawfully obtained data or systems likely to cause serious psychological harm.

Critics, including the Centre for International Governance Innovation (CIGI) and the Information Technology and Innovation Foundation (ITIF), claimed AIDA could harm innovation and was poorly executed. They argued that many potential harms caused by AI could be addressed by amendments to existing laws rather than new legislation.

Innovation groups also worried that AIDA governed a considerably larger number of systems than the EU AIA, potentially placing a greater compliance burden on Canadian AI companies and complicating Canada's ability to harness the technology's economic advantages.

That said, recent data shows Canadians are divided on the development of AI technologies, with 30% saying it's good, 34% believing it's bad, and 36% not knowing enough to say. Canadians are also wary of the technology: only 31% said they trust AI, 19 percentage points lower than the global average. CPA Ontario's December 2024 member survey revealed a similar split among CPAs, with 30% stating they trust AI and 30% stating they do not. This lack of confidence suggests there is a role for AI regulation in helping the Canadian public understand and feel more comfortable with AI's benefits and risks.

Canada faces a complex challenge in developing appropriate AI regulation. The federal government needs to chart a path forward that promotes responsible AI use without chilling investment or slowing innovation.

The Role of the CPA in AI Governance

The current AI regulatory environment presents both a challenge and an opportunity. With federal AI regulation at a standstill, organizations require strategic guidance to navigate a quickly shifting landscape. As covered in our 2024 whitepaper, CPAs are emerging as potential leaders in AI governance. Their expertise in risk management, compliance, and financial oversight puts them in a unique position to develop frameworks that address the challenges of AI.

Outside of governance, CPAs should explore how AI technology can be applied to their own work. The December 2024 CPA Ontario member survey revealed important insights: 45% of CPAs view AI as an opportunity, while 19% consider it a threat. Most significantly, 79% identified focusing on high-value work as AI's primary benefit, with 78% recognizing governance of AI as an important opportunity for the profession. Firsthand experience will make it easier to advise clients and employers on AI governance. When using AI tools, CPAs must apply the same rigorous professional standards and ethical principles that have always guided their work. As explained in CPA Ontario's Regulatory Standard, there's no algorithm for ethics or good governance. The core principles of professional judgment, ethical behavior, and public protection remain the foundation.

CPAs' capacity to deliver independent, objective assessments positions them at the critical intersection of technology, governance, and trust. By leveraging their professional expertise, CPAs can play a pivotal role in guiding responsible AI adoption during this period of regulatory uncertainty.

Finding a Balanced Approach

With AIDA stalled and uncertainty around who will govern Canada after the next federal election, it's difficult to predict how Canada will approach AI regulation going forward. What is certain is that the applications of AI will proliferate, and adoption will grow.

Canada finds itself in the middle of an international AI regulation tug of war, trying to balance competing interests. Lean too far toward the EU's approach, and Canadian tech companies could struggle to innovate. Pull too hard toward the U.S.'s hands-off model, and Canadian tech companies risk losing access to European markets while Canadian citizens are left vulnerable. The outcome, whichever side prevails, will determine Canada's role in the rapidly evolving global AI ecosystem.