The trust problem with AI


Why audit firms are entering the business of AI assurance

As artificial intelligence (AI) continues to reshape everything from how companies hire to how doctors diagnose, one issue has become increasingly clear: people don’t trust AI. A recent global study from KPMG and the University of Melbourne, “Trust, attitudes and use of artificial intelligence,” finds that one in two people in Canada report using AI tools for personal, work or study purposes on a regular or semi-regular basis, well below the global average.

Trust lags even further behind adoption: only 34 per cent of Canadian respondents are at least somewhat willing to trust the outputs of AI systems. With global adoption surging but confidence lacking, the trust problem needs tackling. This is where CPAs are stepping in.

From financial statements to algorithms

For accounting firms that already deliver a broad range of services, from traditional financial statement audits to niche and nuanced industry advisory, AI assurance is emerging as a new area where CPAs can play a critical role. Demand is also growing. Organizations are seeking help to manage and report on various aspects of their AI development, deployment, use and oversight, from the reliability of their algorithms to the robustness of their management systems and governance.

CPA Canada’s recent whitepaper on AI assurance, Closing the AI trust gap: The role of the CPA in AI assurance, highlights the pivotal role that assurance engagements, adhering to related professional standards, play in improving the quality and reliability of information used by decision makers. The report emphasizes that just as CPAs have long helped stakeholders trust financial information, they can also help them trust intelligent systems.

Why CPAs?

CPAs have a history of working at the intersection of business, technology and regulation. As AI increasingly makes high-stakes decisions, that legacy matters. The profession is well acquainted with evaluating how information systems work and assessing risk, whether the system is a financial ledger or a machine learning model. CPAs bring knowledge of controls, experience in evaluating processes and controls against established criteria, and skill in assessing and responding to risk, which together provide a solid foundation for offering assurance services related to AI systems.

It would be naïve to say that these services will be offered by CPAs alone. Depending on the subject matter involved, these engagements will likely include experts from relevant domains, such as AI engineers, scientists, and privacy and legal specialists, as part of the engagement team.

Industry moves

Accounting firms aren’t waiting on the sidelines. PwC Canada recently announced the launch of its Assurance for AI services—a move that signals how seriously firms are considering this emerging space.

Member firms of Deloitte, including its offices in the U.K. and Australia, are offering algorithm assurance to help organizations manage their AI system algorithms and ensure conformity and compliance with regulatory requirements. Other firms, like EY and KPMG, are increasingly offering services for AI systems that address embedding responsible AI governance and assessing risk, security and compliance with emerging standards and regulatory schemes such as the EU AI Act.

These offerings run the gamut from consulting and advisory services to independent assessments of AI governance, risk and control frameworks, all aimed at helping clients deploy AI responsibly and transparently.

Accounting firms, however, are not the only players in this space. Consulting firms, technology advisories, and even AI insurance and algorithmic auditors are attempting to meet market demands. Given the proliferation of the technology and the spectrum of industries, tools and maturity levels, there will likely be no shortage of potential AI services in the foreseeable future.

The road ahead

As AI becomes more powerful and pervasive, the demand for oversight will only grow. Greater collaboration between regulators, businesses and assurance professionals is needed to establish how trust can be built. Clear and consistent criteria for auditing AI systems are still emerging, and more work is needed in both the market and the profession to establish practical, uniform approaches to assessing AI systems.

Education and upskilling will also be critical as CPAs adapt their competencies to address technical and ethical questions that didn’t exist a few years ago. This means not only acquiring new skills but also staying on top of a technology that is constantly evolving toward higher levels of capability and complexity.

In a world increasingly run by algorithms, trust doesn’t just happen: it’s built, tested and verified—and CPAs are stepping up to make that possible.


Melissa Robertson leads CPA Canada’s Audit and Assurance Technology Committee and develops research and thought leadership related to emerging trends, innovation, and technology. She is also a member of the Standards Council of Canada’s Data Governance and Artificial Intelligence Standardization Collaborative and leads CPA Canada’s research and thought leadership projects related to artificial intelligence.

Originally published by CPA Canada.
