
Potential AI Risks the Professional Service Industry Will Have to Reckon With

Written By: Ryan Farragher, CPA, MBA
Nov 7, 2023



The term "artificial intelligence" has been around for over 50 years; however, it is now more popular than ever thanks to the release of OpenAI's ChatGPT. ChatGPT is an advanced chatbot that can hold back-and-forth discussions with users, and businesses are currently deciding how they can effectively utilize this technology. Mentions of AI on corporate earnings calls reportedly rose by as much as 77 percent from a year earlier, thanks to ChatGPT's impressive ability to quickly generate responses and engage in intelligent conversations with users. With access to endless amounts of data, it is not hard to foresee a future where this technology can answer almost any question. AI is predicted to have the greatest impact on specialized service industries, including professions such as medicine, law, accounting, architecture, and software engineering. If the predictions prove true, these industries will need to be mindful of the risks they face after incorporating AI technology into their business operations.

Despite its recent success, flaws in ChatGPT have been discovered since its release. A major weakness of large language models (LLMs) is their reliability. A high-profile example of this unreliability involves a New York City lawyer who submitted a court briefing that cited multiple fabricated court cases, quotes, and other misleading information. The lawyer admitted to relying on ChatGPT to find relevant court cases, believing that the LLM's responses could not be false. After receiving ChatGPT's fabricated court cases, he even asked ChatGPT whether the cases were real, and the chatbot assured him that they were authentic. Instead of reviewing the facts and sources he received, he regrettably relied on the chatbot. His reliance on AI caused irreparable damage to his reputation and that of his law firm, and it also resulted in fines and penalties. Had he exercised due diligence and verified the information the program offered, he could have avoided the fines, penalties, and embarrassment that resulted.

OpenAI provides contradictory information about ChatGPT's reliability and utility. On its website, the company claims that ChatGPT can be used as a tool to "get instant answers" and "learn something new." Sam Altman, CEO of OpenAI, even describes ChatGPT as a time saver when used to summarize lengthy articles and books. Yet the company also acknowledges flaws and warns that the program's output is often wrong or misleading and should not be used as advice. This raises the question: How can we get answers or learn something new if we cannot rely on the information?

To test the extent of ChatGPT's inaccuracies, a Duke professor gave his students an assignment instructing them to generate essays using ChatGPT based on prompts that he provided. The students and professor found that all 63 essays contained some type of fabricated or false information. The papers were full of fake quotes, false sources, and misrepresented or mischaracterized facts. The professor said he expected to find inaccuracies, but not at the rate at which they appeared.

Another concern with using AI for business operations is our limited knowledge of how AI works. AI is designed to constantly improve its output as it is fed more information. It is therefore not a static source; it is always changing. This model operates differently from typical software. For example, when you enter numbers and functions into Excel, it will always return the same, predictable result. AI, however, may reach different conclusions over time as it accumulates new data. If ChatGPT is repeatedly asked, "Who is the greatest football player of all time?", it can supply a different response each time. Sometimes the answers are the same, just worded differently, but the more information the model accumulates, the more likely its conclusions are to change. With no way of knowing how or when AI will change or correct itself, users may unknowingly be relying on outdated output.
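The contrast can be sketched in a short, purely illustrative example (this is a simplification, not OpenAI's actual implementation): a spreadsheet-style formula is deterministic, always returning the same value for the same inputs, while a language model samples each next word from a probability distribution, so repeated runs on the same prompt can diverge. The word scores below are hypothetical.

```python
import math
import random

def spreadsheet_sum(values):
    # Deterministic, like a SUM formula in Excel: identical inputs
    # always produce the identical output.
    return sum(values)

def sample_next_word(word_scores, temperature=1.0, rng=None):
    # Stochastic, like a language model choosing its next word:
    # scores are converted to probabilities (a softmax) and one word
    # is drawn at random, so repeated calls can return different words.
    rng = rng or random.Random()
    words = list(word_scores)
    weights = [math.exp(word_scores[w] / temperature) for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical scores for completing "The greatest player of all time is ..."
scores = {"Brady": 2.0, "Rice": 1.5, "Montana": 1.2}

print(spreadsheet_sum([1, 2, 3]))      # always 6
print(sample_next_word(scores))        # varies from run to run
```

The same asymmetry is why an auditor can re-run a spreadsheet and reproduce its result, but cannot assume a chatbot will repeat yesterday's answer.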

OpenAI is not currently regulated, and no one knows how it will be regulated in the future. There have not been any successful cases against AI companies for providing false information. AI companies claim they have the same protections as other large tech platforms under Section 230 of the Communications Decency Act, which shields companies from legal liability for third-party generated content. Until platforms like ChatGPT can provide a greater level of assurance to specialized professionals, careful consideration is needed whenever AI products are applied.

Professionals are encouraged to look for new ways to incorporate AI into their businesses, but they must evaluate AI's shortcomings and implement procedures to mitigate the risks involved, as they will be held liable for misuse of generated information. Developers of ChatGPT and other AI products continue to work on newer versions that they claim are more relevant and accurate. The industry is moving rapidly, and it won't be long before these companies are selling industry-specific AI products to professional firms. Adopting these products as they become available can secure a competitive advantage, but that advantage must be balanced against the risks involved.

There is no denying that AI is an incredible technology, but there is a tendency to overhype emerging technologies and overlook their potential risks. Cryptocurrency, for example, gained popularity after people began to recognize its potential, and companies like FTX took advantage by promoting the hype while keeping investors ignorant of the risks. Until AI can verify and review its own output and provide users with guarantees of accuracy and relevancy, we will still need professionals to provide those assurances.



