
Quebec’s privacy regulator spent the first two years of Law 25 in collaboration mode. The CAI’s stated approach was to help organizations comply, not to punish them. That posture is now shifting. The Commission has signalled that routine compliance audits will scale through 2026, and its enforcement focus is explicitly on systematic governance failures in AI deployments, not one-off privacy incidents.

Most Quebec organizations still treat Law 25 as a privacy question their general counsel answered in 2023. They completed the consent notices and the data inventory and moved on. Section 12.1 covers decisions most of them do not realize are automated.

What Section 12.1 Actually Says

If your organization makes a decision about an individual exclusively through automated processing, and that decision produces legal or similarly significant effects, you must notify the person no later than when the decision is communicated. You must disclose the personal information used, the principal factors behind the outcome, and the right to request human review. Whether you built the system in-house or bought it off the shelf, the obligation is the same.

That definition quietly pulls in AI resume screeners, credit and insurance scoring models, chatbots resolving account decisions without a handoff, analytics that auto-flag accounts for closure, and internal copilots whose decisions staff rubber-stamp. Most of these live in HR, customer service, or client operations, not in a data science team that knows what an algorithmic impact assessment is.

The practical problem: most organizations have not mapped their AI surface area to this obligation. I have seen compliance officers who can list every data processor contract yet cannot name the AI systems in their own HR pipeline. That gap is the audit risk.

What I Would Do This Quarter

Inventory every AI tool in use, including ones embedded in SaaS platforms (LinkedIn Recruiter, your HRIS, your CRM, your support platform). For each one, two questions: does it produce a decision that affects an individual, and is the decision made exclusively through automation with no meaningful human review? A reviewer who approves 199 of 200 daily AI recommendations is not exercising meaningful review. The CAI has signalled it looks at practice, not policy language.
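The two-question triage above can be sketched as a small screening script. Everything here is an illustrative assumption, not anything the CAI has published: the field names, and especially the 5% override-rate threshold used as a proxy for "meaningful" human review. The point it demonstrates is the 199-of-200 problem: a reviewer who overrides almost nothing is, in practice, not reviewing.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One AI tool from the inventory, including SaaS-embedded features."""
    name: str
    affects_individuals: bool        # question 1: decision about a person?
    recommendations_reviewed: int    # AI recommendations a human looked at
    recommendations_overridden: int  # how many the reviewer actually changed

def in_scope_for_12_1(tool: AITool, min_override_rate: float = 0.05) -> bool:
    """Flag tools likely in scope for Section 12.1.

    A tool is flagged when it affects individuals AND human review is
    nominal, i.e. the reviewer overrides almost nothing. The 5% override
    threshold is an illustrative assumption, not a regulatory figure.
    """
    if not tool.affects_individuals:
        return False
    if tool.recommendations_reviewed == 0:
        # No human in the loop at all: exclusively automated processing.
        return True
    override_rate = tool.recommendations_overridden / tool.recommendations_reviewed
    return override_rate < min_override_rate

# The rubber-stamp case from the text: 199 of 200 approved as-is.
screener = AITool("resume-screener", True, 200, 1)
print(in_scope_for_12_1(screener))  # True: a 0.5% override rate is not meaningful review
```

The design choice worth noting is that the check measures practice (actual override counts), not policy language, which is the distinction the CAI has signalled it cares about.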

Every tool in scope needs three artifacts: a Section 3.3 privacy impact assessment, a Section 12.1 disclosure notice, and a documented human review mechanism. Missing any of these is what gets flagged.
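A gap report over those three artifacts can be kept as a simple mapping. The artifact labels below mirror the statute's sections; the inventory structure and tool names are hypothetical, a minimal sketch of the bookkeeping rather than a compliance product.

```python
# The three artifacts every in-scope tool needs under Law 25.
REQUIRED_ARTIFACTS = {
    "s3.3_privacy_impact_assessment",
    "s12.1_disclosure_notice",
    "documented_human_review_mechanism",
}

def artifact_gaps(inventory: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per in-scope tool, the required artifacts it is missing."""
    return {
        tool: REQUIRED_ARTIFACTS - have
        for tool, have in inventory.items()
        if REQUIRED_ARTIFACTS - have  # keep only tools with gaps
    }

# Hypothetical inventory: tool name -> artifacts on file.
inventory = {
    "resume-screener": {"s3.3_privacy_impact_assessment"},
    "support-chatbot": set(REQUIRED_ARTIFACTS),
}
print(artifact_gaps(inventory))
# Flags resume-screener only: it lacks the disclosure notice
# and the documented human review mechanism.
```

Kept this flat on purpose: a per-tool set difference is auditable at a glance, which is exactly what the report is for.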

Where ISO 42001 Earns Its Keep

This is where ISO 42001 becomes useful operationally, not just as a logo. The Annex B controls require the same inventory, impact assessment, and human oversight documentation Law 25 expects. One compliance exercise can satisfy a provincial regulator and an international client’s vendor assessment at the same time.

I spent 21 years in military aviation working inside AF9000+, the military’s ISO 9001-based quality management system. AI governance is the same mechanics: documented system, clear ownership, measurable controls, evidence of practice. What you cannot demonstrate, you cannot claim.

A Preview of What Comes Next

Quebec organizations have one structural advantage. Law 25 is a preview of what Canadian federal AI regulation will look like when Minister Solomon’s renewed strategy lands. Getting ahead of Section 12.1 is not just provincial compliance. It is a rehearsal for whatever replaces AIDA.

The ordinary failure mode is not exotic. It is an analytics tool, a hiring screener, or a chatbot running without a PIA, without a Section 12.1 disclosure, without a documented human-in-the-loop. If your organization has not inventoried its AI tools against Law 25 in the past twelve months, that is the conversation an auditor will start.
