AI and accounting: Privacy, security, and compliance for CPAs


AI expert Garrett Wasny describes how CPAs can add value to their roles and take on emerging opportunities by integrating AI privacy, security, and compliance.


Artificial intelligence in accounting is changing the game for CPAs, and fast – one recent report states that 93% of accounting professionals now use AI to deliver strategic business advisory services. Amid this shift, AI privacy, security, and compliance are essential for CPAs to manage risk and protect the trust of clients and stakeholders. As a certified management consultant and self-described AI evangelist who has trained thousands of accountants, Garrett Wasny shares his front-row perspective on how CPAs can add value to their roles by integrating AI privacy, security, and compliance.

What different types of AI do CPAs encounter at work?

The definition of AI is often too narrow – people might only think about ChatGPT or Copilot. There are at least six types of AI that accountants could access as part of their workflows:

  • Generative AI: Tools like ChatGPT, Copilot, or Perplexity that have broad utility and can handle a wide range of requests and tasks.
  • Embedded AI: Software with AI built in – for example, QuickBooks has an embedded AI chatbot and Adobe offers an AI assistant.
  • Infrastructure AI: Back-end systems that process and store data in the cloud, offered by providers such as Oracle, Databricks, and Amazon (for example, Amazon Personalize).
  • Industry-specific AI: Tools built to focus on specific industries like construction, healthcare, or real estate; these are designed to use specific data and perform specialized tasks.
  • Regulatory compliance AI: Risk intelligence tools like LSEG World-Check that monitor consumer and business identity, payments, and compliance risk to help organizations comply with mandatory regulations for anti-money laundering, counter-terrorism financing, and more.
  • Client-controlled AI: Proprietary, customized AI models that are built by individual companies to perform customized tasks.

What’s top of mind when you think about AI privacy, security, and compliance?

These are the foundation for working with AI in accounting in a trustworthy and ethical way. Security means keeping bad actors out – preventing unauthorized access, attacks, or data misuse – by scanning your work landscape, monitoring access controls, and encrypting data. Privacy encompasses the systems and processes that protect your crown jewels, such as personally identifiable information and client data that must not be disclosed without proper consent. With compliance, just as accounting has standards and codes of conduct, AI has regulations and frameworks. Major ones include the General Data Protection Regulation, the California Consumer Privacy Act, the Cloud Security Alliance, and the Texas Risk and Authorization Management Program.


Why are AI privacy, security, and compliance increasingly important for CPAs?

These three things are basically job one, because if client data or confidential information is accidentally released, the consequences can be devastating. However, you can't let misgivings about this technology hold you back from using AI in a secure, ethical way. Some people think uploading information online or to an AI tool is inherently a threat to security or privacy, but if you're using the Microsoft 365 suite – Word, PowerPoint, Excel, or Teams – your data is already online and in the cloud. Security levels for AI tools like Copilot, ChatGPT, or Gemini are comparable to, if not higher than, those for Microsoft's own services. When you use AI tools, you aren't taking on a brand-new set of exposures; it's more like a lateral move.

Are AI tools safe to use with confidential data?

These tools have plans or tiers that have different security features. With ChatGPT for example, the Team plan aligns with many of the regulatory frameworks I mentioned, has a number of security features, and by default your data is encrypted and excluded from being used to train the model.

AI tools may store user input for model improvement and can monitor your conversations, threads, and interactions. With ChatGPT, if your thread is used for training, metadata is captured: the date, time, and general questions asked. That information is anonymized before it is stored. You can turn off your conversation history to limit data retention to 30 days. Because employees at AI companies may review your interactions, the surest way to maximize privacy is to avoid sharing sensitive or confidential information in the first place.
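One practical way to act on that advice is to scrub obvious identifiers from text before pasting it into a general-purpose AI tool. A minimal sketch, assuming a simple regex approach (the patterns and the `redact` helper are illustrative only – real PII detection needs far broader coverage and review):

```python
import re

# Illustrative patterns only -- real PII screening needs much broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

A redactor like this is a screening aid, not a guarantee – names, account numbers, and context-dependent identifiers still require human judgment before anything is shared with a third-party tool.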

What are some ways to use AI ethically?

It’s critical to have an AI usage policy – it can be as simple as starting with a one-page compliance framework. Consent is also key – you need a written agreement with clients that:

  • Gives you permission to use AI tools on their work;
  • Outlines what will be done (for example, AI will be used to create first drafts of reports or do scenario planning);
  • Confirms the work will be verified and checked by a CPA; and
  • States that the workflow will be documented and disclosed.

Be transparent about how you use AI. Document both the specific platform and model you used (for example, ChatGPT 5) and the link to where the output was generated to capture your audit trail. Your interaction log should capture what tool was used, what data was entered, and what decisions were made based on that AI output.
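An interaction log like the one described above can be as simple as an append-only record per AI interaction. A minimal sketch, assuming a JSON-lines file (the field names and `log_ai_interaction` helper are illustrative assumptions, not a prescribed standard):

```python
import json
from datetime import datetime, timezone

def log_ai_interaction(path, tool, model, data_summary, decision, link=""):
    """Append one audit-trail record (tool, model, inputs, decision) as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                   # e.g. "ChatGPT"
        "model": model,                 # e.g. "GPT-5"
        "data_entered": data_summary,   # describe inputs; avoid pasting client PII
        "decision": decision,           # what was done with the AI output
        "output_link": link,            # link to where the output was generated
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because each record is one line of JSON, the log is easy to append to from any workflow and easy to filter later when a reviewer asks which engagements relied on AI output.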


How can CPAs use AI to add value to their roles?

When it comes to AI in accounting, it's clear there will be a huge opportunity for CPAs in AI due diligence. CPAs will be called upon to oversee artificial intelligence and its inputs and outputs, verification, and documentation. Responsibilities might also include examining the technical components of AI models, ensuring models are explainable and transparent, explaining how AI arrived at decisions, and ensuring organizations comply with AI regulations. This work will involve continuous assessment, not just one-time reviews of vendors or AI users for regulatory or insurance purposes.

What are the risks of not using artificial intelligence?

Some companies ban AI use, but then people use it under the table. If people take client or company information and upload it in a rogue manner, it could expose the firm to reputational, legal, and regulatory risks. Competition-wise, most CPAs are using AI in some form – to write emails, analyze financial statements, or personalize reports. Work that used to take hours now takes minutes. By not using AI, you're putting yourself at a major competitive disadvantage and giving others more time to experiment and build deeper efficiency with these tools. Another dimension is talent – you're not going to attract the best people if you refuse to use new tools like AI. As for productivity risks, manual workflows and manual data entry will clearly not be sustainable as AI adoption spreads.

As professionals we need to get our work done, but we often start with a blank page. With AI, you can go from 0 to 80% in seconds. Of course, you will need to vet and fact-check AI outputs line by line. But in terms of how much you can do, it’s an incredible accelerator that allows us to work smarter and faster in ways we’ve never seen before.


Leah Giesbrecht is a communications specialist at CPABC.
