CPAs and Artificial Intelligence: A Case Study

Regulatory Standard Edition: October 2024 | Published: November 18, 2024

Across Ontario, CPAs are using artificial intelligence to focus on strategic and advisory services, increase productivity and enhance client service. However, CPAs who have begun deploying artificial intelligence in the course of their work need to be aware of their obligations laid out in the CPA Code of Professional Conduct. When using AI, CPAs must remember that there is no algorithm for ethics, professional standards or good governance, and that accountabilities for protecting the public and upholding the highest standards of the profession still apply.

The following case study illustrates the risks and obligations for CPAs in the use of AI, and how CPAs can mitigate those risks.

Case Study: Large Language Models (LLMs) and Data Hallucinations*

*Note that this case study is fictitious and used for illustrative purposes only.

Richard is a CPA working for a steel manufacturing firm with clients on both sides of the Canada-U.S. border. To help him prepare his reporting for the 2024/25 tax year, Richard uses an “off-the-shelf” large language model (LLM) AI tool to assist with data collection and research.

Richard uses this AI tool to research, analyze and summarize recent regulatory changes in both Canada and the United States, including updates to the tax codes, international standards and recent SEC and OSC filings. He then uses this summary to inform the completion of his filing on behalf of his company.

However, while analyzing the Canadian and U.S. tax codes, the tool produces a general data generation hallucination. A general data generation hallucination refers to a situation where an AI tool generates an output that may appear accurate and coherent but is not supported by factual or reliable information.

In Richard’s case, the large language model cites a provision of the U.S. tax code that does not exist. As a result of this error, Richard’s organization is forced to pay a significant fine in the United States, and Richard is brought before the CPA Ontario Professional Conduct Committee.

What could Richard have done differently?

The same obligations set out in the CPA Code of Professional Conduct that govern the use of any software apply to the use of artificial intelligence, as do the same accountabilities and responsibilities.

CPAs should ensure that there is governance in place for how their organizations identify the risks associated with the use of AI, including data hallucinations, the introduction of data bias and cybersecurity threats, and develop the appropriate mitigation strategies.

Likewise, organizations need to ensure they are building the necessary technical expertise to support that governance.

It was Richard’s responsibility to ensure proper governance was in place to address the potential for data hallucinations. Richard was ultimately responsible for ensuring the accuracy of the information presented, which included verifying the output of the AI tool against authoritative sources before relying on it.
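One practical control is to treat every AI-generated citation as unverified until it has been checked against an official source. The short Python sketch below illustrates the idea under stated assumptions: the section list, regular expression and function names are hypothetical and for illustration only, and a real control would draw its authoritative list from the published statute text as part of a broader human review process.

```python
# A minimal sketch, assuming a hypothetical helper and a hand-maintained
# list of verified section numbers. Everything named here is illustrative;
# a production control would source the authoritative list from the
# official statute text.
import re

# Hypothetical set of U.S. tax code sections already confirmed against
# the official text.
KNOWN_SECTIONS = {"61", "162", "163(j)", "199A", "263A"}

def extract_cited_sections(llm_summary: str) -> set[str]:
    """Pull 'Section 199A'-style citations out of an LLM-generated summary."""
    pattern = r"\b(?:Section|Sec\.)\s+(\d+[A-Z]?(?:\([a-z0-9]+\))?)"
    return set(re.findall(pattern, llm_summary))

def flag_unverified_citations(llm_summary: str) -> set[str]:
    """Return citations absent from the verified list.

    Anything returned here requires human review before it is relied on
    in a filing: the model's fluent wording is not evidence of accuracy.
    """
    return extract_cited_sections(llm_summary) - KNOWN_SECTIONS

summary = "Under Section 199A and Section 482B, the deduction is limited."
for citation in sorted(flag_unverified_citations(summary)):
    print(f"Unverified citation, review manually: Section {citation}")
```

A check of this kind is no substitute for professional judgment, but it turns a fabricated citation into a visible item requiring review rather than letting it pass as plausible text.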

While AI offers great potential for CPAs, there is also the potential for damage to the reputation of the individual CPA and the profession if the risks of AI are not fully understood and addressed. As tools and technology change, CPAs should continue to look to the CPA Code of Professional Conduct for guidance.