Is AI moving too fast for regulators to keep up?

By Mathieu de Lajartre
Oct 18, 2023

As the first laws to regulate the use of artificial intelligence start to emerge, industry expert Olivier Blais explains how to prepare for their implementation.

Artificial intelligence (AI) technologies are creating new opportunities for businesses, enabling them to achieve major efficiency gains. But given the speed at which new projects tend to move through their various stages, these opportunities also come with new risks, particularly in terms of ethics and privacy.

As Canadian regulators work to find solutions, organizations should not wait to comply with the new laws that will soon come into effect, advises Olivier Blais, co-founder of Moov AI, a company that provides AI solutions for organizations.

Blais also represents Canada in international AI standards work (he chairs the Canadian delegation to ISO/IEC JTC 1/SC 42, the committee on artificial intelligence) and is helping to shape the Canadian legislative framework.

Here, Blais takes stock of recent developments and what to watch for.

Can you tell us about what has been happening in AI and what it means in practical terms for organizations?

Olivier Blais: Until recently, it took thousands of data points to launch a project, because each one was unique. Today, we are increasingly reusing existing solutions (including those from competing suppliers) that have already been trained on a wide variety of data. As a result, we need much less data to obtain results, although we still develop client-specific solutions for complex cases.

In accounting and finance, generative AI (the same technology as ChatGPT) is increasingly being used to audit data, interpret financial statements and accounting standards, forecast sales and help make financial decisions.

These AI applications are optimized based on context—i.e., best practices, internal company policies and robust accounting standards.

Where does Canada stand in terms of regulation?

Olivier: Like the European Parliament, which recently approved the Artificial Intelligence Act to regulate the industry, Canada aims to play a leading role in the global framework for AI with the Artificial Intelligence and Data Act (AIDA), contained in Bill C-27.

Industry experts expect the legislation to come into force sometime in 2025, although some believe this deadline is too far away. AIDA will focus on transparency, be drafted in reader-friendly language and evolve along with technological developments. Broadly speaking, the legislation states that anyone interacting with an AI system must know they are doing so, and must be able to identify the system's strengths and weaknesses and the elements taken into account to make predictions.

AIDA will require organizations to document their AI solutions and put controls in place to mitigate risks. Many low-impact solutions will be largely unaffected, but those who deploy high-impact AI systems, or who act with malicious intent, will face scrutiny and be held accountable.

Ethics and privacy are central to the CPA profession. Yet AI innovation projects raise many questions in this area. Should we be concerned?

Olivier: The risks are real. Simply entering a query in the basic version of ChatGPT exposes your information, since the bot stores it on OpenAI’s servers in California. And the results are not subject to any validation or control.

For example, imagine an AI system offering dubious medical advice or making a wildly bizarre forecast because it took into account an outlier piece of data. Or think of the legal repercussions of a text containing confidential or copyrighted information. Not to mention how easy it is to circulate false information or commit identity theft and fraud.

To build public trust, organizations need to identify risks now, quantify them and reduce them as much as possible. Otherwise they could face penalties.

It’s time to put adequate safeguards in place. Where do we start?

Olivier: While we plan to use methods drawn from internationally approved models, the idea is to develop sound risk management systems based on strong principles. For example, a system must never be harmful to the humans who use it, and there must always be someone who can be held accountable.

This requires better governance, even if it means setting up a data governance board and guarding against certain biases (particularly discriminatory ones), which are a major risk for businesses.

While there won’t be a flurry of fines overnight, it will be possible, under AIDA, to issue monetary penalties and prosecute regulatory offences.

While waiting for AIDA, what practices can organizations adopt now on a voluntary basis?

Olivier: Several ISO standards relating specifically to AI will soon be published, including ISO/IEC 42001 (AI management systems), which will be the main standard for conformity assessment. Other essential references at the moment include ISO/IEC 27000 (which provides an overview of information security management systems), as well as ISO/IEC 27001 (which sets out requirements for establishing an information security management system) and ISO/IEC 27701 (on privacy information management).

In Canada, we have the Personal Information Protection and Electronic Documents Act (PIPEDA), but because digital technology knows no borders, we also need to consider the European Union’s General Data Protection Regulation (GDPR), as well as the EU’s artificial intelligence legislation.

Here in Canada, there is also a pilot project underway to define and test requirements for a conformity assessment program for AI management systems. In the United States, the National Institute of Standards and Technology (NIST) has also launched an AI risk management framework, which includes a list of 72 risks to assess.

Are you still confident that everything will go smoothly?

Olivier: Absolutely, provided we take the necessary proactive measures to comply with the law and limit the risks to users. It’s a bit like aviation: We no longer worry about whether an aircraft manufacturer has taken all necessary measures before we board an aircraft. This will also be the case with artificial intelligence.

Mathieu de Lajartre joined CPA Canada in 2015 after 15 years in the book industry (mainly business publications). Based in Montreal, he is the Associate French Producer for the digital platform, specializing in producing content for French-speaking readers. Mathieu is also responsible for the French-language version of Pivot, CPA Canada’s magazine.

Originally published by CPA Canada's news site.
