Audits, forecasting, risk analysis, client communication—the potential applications of AI are endless. But there are still challenges and areas where the technology falls short.
At first, it was a party trick. Ask a chatbot to write a sonnet about solvency or explain audits like you’re five—amusing, but hardly transformative. Now, less than three years after ChatGPT’s initial launch, Canadian accounting firms are quietly inviting artificial intelligence into their workflows. But beyond early operational gains, a deeper shift is underway: a redefinition of what accountants do and what decisions only humans should make.
Embracing AI disruption
One of the firms navigating this shift is BDO Canada, where adoption is visible if measured. Microsoft Copilot rephrases emails and summarizes internal meetings; a private enterprise agreement with OpenAI enables secure internal prompting; and a custom-built tool, BDO Boost, is being trained on firm-approved documents to help standardize audit workflows.
The rollout is carefully governed, with one non-negotiable principle: no tool operates independently. AI can draft, analyze or structure—but a person reviews, interprets and finalizes. “There’s always someone reading the output before it goes out the door,” says Brion Hendry, BDO Canada partner and assurance leader for the Greater Toronto Area.
For now, AI at BDO is doing what machines are best at—summarizing long documents, parsing structured data and outputting repeatable templates. In one pilot, auditors feed client system notes into BDO Boost, which identifies control points, flags missing information, and generates a visual map of risks and procedures. The chart doesn’t replace the auditor’s analysis, but it gives them a head start. OpenAI’s enterprise tools support other complex uses, but within tight internal limits. Anything client-related is strictly barred from public-facing models.
Accounting is a trust-based business
The constraint is ethical as much as it is technical. Accountants aren’t just in the business of efficiency; they’re in the business of trust. That means treating AI less like a magic solution and more like a high-powered assistant that can’t be left alone with the keys. That’s especially true in assurance work. “We don’t allow AI to generate working papers or analyze agreements directly,” Hendry says. “The output is not reliable enough yet to trust it without judgment layered on top.”
John Craig, co-founder and CEO of Vigilant AI, is building AI capacity with a similar philosophy. His firm’s platform reliably extracts fields from source documents, such as contracts, invoices, and receipts, and automatically cross-references those fields against general ledger data. It can flag exceptions, validate transaction populations, and produce a “shadow ledger”—a reconstructed financial map auditors can test for accuracy. But Craig is clear on one fundamental: “The auditor stays in the loop. Judgment stays with the human. Always.”
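The cross-referencing Craig describes is, at its core, a reconciliation problem: match fields extracted from source documents against ledger entries and surface anything that doesn't line up. A minimal sketch of that idea in Python follows; the field names, tolerance, and sample figures are entirely hypothetical, and this is an illustration of the general technique, not Vigilant AI's implementation.

```python
# Illustrative sketch of document-to-ledger cross-referencing: match
# extracted invoice fields against general ledger entries and flag
# exceptions for human review. Field names and tolerances are made up.

def cross_reference(extracted_docs, ledger, amount_tolerance=0.01):
    """Return (matched, exceptions) after reconciling docs against the ledger."""
    # Index ledger entries by invoice number for quick lookup
    by_invoice = {entry["invoice_no"]: entry for entry in ledger}
    matched, exceptions = [], []
    for doc in extracted_docs:
        entry = by_invoice.get(doc["invoice_no"])
        if entry is None:
            exceptions.append((doc, "no matching ledger entry"))
        elif abs(entry["amount"] - doc["amount"]) > amount_tolerance:
            exceptions.append((doc, "amount mismatch"))
        else:
            matched.append((doc, entry))
    return matched, exceptions

docs = [
    {"invoice_no": "INV-001", "amount": 1200.00},
    {"invoice_no": "INV-002", "amount": 850.00},
    {"invoice_no": "INV-003", "amount": 430.00},
]
ledger = [
    {"invoice_no": "INV-001", "amount": 1200.00},
    {"invoice_no": "INV-002", "amount": 835.00},  # differs from the invoice
]

matched, exceptions = cross_reference(docs, ledger)
print(len(matched), len(exceptions))  # 1 matched, 2 exceptions flagged
```

In keeping with the principle Craig insists on, a tool like this only flags exceptions; each one would be queued for an auditor to investigate, not resolved automatically.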
New tools, new responsibilities
That division of labour—machines for speed, humans for judgment—may be emerging as the profession’s new normal. But it also introduces a host of new responsibilities: How do firms govern tool use? Who trains the humans in charge of the machines? And how do you prove to clients and regulators that an AI-assisted audit is still an audit?
Those questions aren’t hypothetical. Dr. Jodie Lobana, who advises and trains boards and public institutions on AI governance, ethics and risk management through her firm AIGE Global Advisors, has seen organizations adopt AI tools before defining what counts as appropriate use. “If your people are already using AI and you don’t have a policy in place,” she warns, “you’re at risk.”
Lobana advises on building internal safeguards: enterprise licenses with strong data protections, clear usage policies and privacy-first practices that bar identifiable client data. She also emphasizes the need for explainability. “You need to be able to explain, at least at a high level, how the AI got from input to output,” she says. Explainability also helps mitigate concerns about bias—and firms need to know and be able to communicate not just what AI concludes, but why.
This is where tools like BDO Boost and Vigilant AI aim to distinguish themselves. Both keep models fully contained within the organization, with no internet access or outside data scraping. At Vigilant, every connection between source documents and transactions is cross-indexed and visually verifiable—what Craig calls a foundation for “auditing the audit.”
Effective training is key
Still, safe tools are only half the equation. Training is the other. At BDO, a network of “digital champions” leads local onboarding, while bootcamps and an in-house program called FutureCraft help staff learn how to prompt, prototype and refine their own automations. The idea is to shift the mindset: from seeing AI as a black box to treating it like a colleague (albeit with a very literal way of thinking).
So far, AI hasn’t replaced any roles at BDO, but that doesn’t mean roles won’t change. “We haven’t seen big shifts yet, but it’s coming,” Hendry says. “The tools are getting better and the way people work will change with them.” Forecasting applications remain limited, but firms are beginning to explore how AI might support forward-looking analysis as the tools mature.
That change could reshape what makes the profession appealing. If auditors spend less time documenting controls and more time understanding client strategy, the job starts to look less like compliance and more like analysis. “AI can help us focus more on the business, more on insight,” Hendry says. “I think that’s what’s going to make audit attractive again.”
Weighing risks and opportunities
Of course, opportunity always arrives with risk. Balancing innovation with control is the challenge Hendry points to first. AI evolves in real time, while the CPA profession is built on structure and regulation; managing that tension is becoming a critical leadership skill.
Lobana sees a broader concern: regulatory lag. “Technology is advancing at a much faster pace than the regulations meant to govern it,” she says. “That’s a risk in itself.” Hendry puts it more bluntly: “The standards need to come in line with the technology that’s available. We want to do this right, but we also want to move forward.”
Still, firms that lean into AI now, with care and clarity, may be better positioned to thrive as the tools mature. AI is not yet ready for the hardest parts of the job: it can’t reliably interpret tone, weigh context or apply professional skepticism, although Lobana argues that, given enough information and context, the tools are getting there.
For those still on the sidelines? AI won’t replace CPAs, but CPAs who can use AI might replace those who can’t.
Liza Agrba is a freelance writer whose work has appeared in Toronto Life, the Globe and Mail, Maclean’s, Chatelaine and more.
Originally published on CPA Canada.