Canada's Artificial Intelligence and Data Act died on the order paper when Parliament was prorogued, and no replacement bill is on the horizon. While AIDA was alive, organizations had a direction to plan against: risk tiers, documentation obligations, compliance timelines. The prorogation erased that planning horizon overnight.
If your organization was waiting for federal clarity before building AI governance, you just discovered the wait is indefinite. Here is what is actually enforceable right now, what is coming from outside Canada's borders, and what you should be doing regardless.
What's Actually In Force Right Now
Quebec's Loi 25 has been fully in force since September 2024 and applies to any organization that handles personal information of Quebec residents, regardless of where your headquarters sits. If you process data about someone in Quebec, you are subject to it. Key requirements:
- Privacy impact assessments for any project involving the acquisition, development, or overhaul of systems that process personal information
- Transparency obligations when a decision about an individual is based exclusively on automated processing, including informing the person and offering a way to have the decision reviewed
- Mandatory designation of a privacy officer
- Breach notification obligations
Ontario's Working for Workers Four Act (Bill 149) introduced Canada's first AI-specific employment provision: employers using AI to screen, assess, or select candidates must disclose this in publicly advertised job postings. PIPEDA still governs federally regulated private-sector privacy, and the Privacy Commissioner has made clear that AI systems processing personal data fall squarely under existing requirements.
The International Pressure You Cannot Ignore
The EU AI Act is not a European problem. It classifies AI systems by risk level and imposes obligations ranging from documentation and transparency to outright prohibition for the highest-risk categories. Critically, it applies to any AI system whose outputs affect people within the EU, regardless of where the system operates from.
For Canadian organizations with European clients, partners, or suppliers, this means due diligence requests about your AI governance practices are coming, if they haven't arrived already. Given Quebec's deep business ties to Europe, far more organizations fall into this category than assume they do.
ISO/IEC 42001 is rapidly becoming the recognized benchmark for demonstrating responsible AI practices, much as ISO 27001 became the baseline expectation for information security. European procurement teams are increasingly asking about it as part of vendor evaluations.
What You Should Be Doing Now
Do not treat the federal vacuum as permission to wait. Organizations that build governance now will have the structure to adapt when Canadian legislation eventually arrives. Those that wait will face compressed compliance windows against deadlines they are not prepared for.
Start here:
- Conduct privacy impact assessments on every AI system that touches personal data
- Build a real inventory of your AI systems, documenting what each one does, what data it processes, what decisions it informs or makes, and who owns its behaviour
- Classify systems by risk: a chatbot answering product questions and a model screening job applicants require fundamentally different levels of oversight
- If you have any European connection, start mapping your AI systems against EU AI Act risk categories now
- Stop treating governance as something you build after the law arrives, and start treating it as the operational discipline it actually is
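The inventory and risk-classification steps above don't require waiting for a statute; even a lightweight internal register gets you started. Here is a minimal sketch in Python. All names, fields, and the risk heuristic are illustrative assumptions, loosely modelled on EU AI Act-style tiers, not a legal classification scheme.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

# Hypothetical decision domains that typically attract the strictest
# oversight under frameworks like the EU AI Act (employment, credit, etc.).
HIGH_RISK_DOMAINS = {"hiring", "credit", "benefits", "education"}

@dataclass
class AISystem:
    name: str
    purpose: str
    owner: str                       # who is accountable for its behaviour
    processes_personal_data: bool
    decision_domain: str             # e.g. "hiring", "marketing", "support"
    makes_or_informs_decisions: bool

    def risk_tier(self) -> RiskTier:
        # Illustrative heuristic only: real classification requires
        # legal review against the applicable framework.
        if self.makes_or_informs_decisions and self.decision_domain in HIGH_RISK_DOMAINS:
            return RiskTier.HIGH
        if self.processes_personal_data:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

inventory = [
    AISystem("product-faq-bot", "Answers product questions", "Support",
             processes_personal_data=False, decision_domain="support",
             makes_or_informs_decisions=False),
    AISystem("resume-screener", "Ranks job applicants", "HR",
             processes_personal_data=True, decision_domain="hiring",
             makes_or_informs_decisions=True),
]

for system in inventory:
    print(f"{system.name}: {system.risk_tier().value}")
```

Note how the two example systems mirror the contrast drawn above: the FAQ chatbot lands in the minimal tier while the resume screener is flagged high-risk, which is exactly the distinction that determines how much oversight each one needs.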
The organizations building governance today will adjust smoothly when legislation catches up. The ones treating the regulatory vacuum as a green light will discover, too late, that the absence of a federal law never meant the absence of obligation.