AI affects professional liability
Accountants using generative artificial intelligence (AI) need to embrace risk-protection policies.
Accounting regularly finds itself at the forefront of technical advances as it considers how to incorporate them into everyday operations.
Every new tool or platform promises efficiency and accuracy, but also brings with it new risks and vulnerabilities.
This time around it is generative AI, a topic that has attracted extensive publicity since the launch of a leading AI tool, to mixed reactions.
Results accuracy
To help generate responses to a user’s questions, generative AI models are trained on a vast body of data. The current leading generative AI model has been trained on a massive corpus of data consisting of books, articles and web pages covering a very wide array of subjects, including accounting.
The main concern with such models is that the accuracy of the output greatly depends on the training data being up to date and of appropriate quality.
While the model will likely provide a correct response when queried about very basic accounting questions, such as what accounting standards govern revenue transactions, it may falter in providing more technical responses.
Consider, for example, a query about a specific accounting treatment of a transaction that requires the application of a recent IFRS standard. The model will draw information from a range of online resources, but it cannot always distinguish credible sources from unreliable ones.
It may also draw from discussions on the draft version of the standard, which may differ from the version that is subsequently issued, leading to an inaccurate response.
Moreover, generative AI systems currently fall short in grasping context and subtle differences, which is critical for reaching accurate conclusions.
These systems cannot match the depth of human understanding or scepticism that the proper application of professional standards and legislation requires.
Relying solely on AI without human oversight can be risky. Misunderstandings or misapplications in professional settings, especially without human verification, can lead to significant mistakes and potential professional liability repercussions. It is essential to critically assess AI-derived results before taking action.
Data confidentiality
The vast majority of data processed by accountants and other finance professionals is personal and proprietary. It therefore has to be protected under the relevant data-protection legislation.
Once data has been entered into a generative AI system, all control over where that data is shared and stored is lost.
The AI system may have its own data privacy policy, but it will not protect any of its users from liability in the event of a data breach, as the system’s users are still ultimately responsible for that data. Doubtless aware of such possibilities, hackers will likely target AI systems in the future, further increasing the risk.
Protection
Since current generative AI models can, in their own way, still help accountants and accountancy firms achieve their objectives more efficiently, the opportunity they offer should not simply be squandered. However, appropriate precautions should be taken.
The first step should be to develop clear rules on the use of AI models, such as ensuring that queries do not contain confidential information and that the models are used only to conduct initial research.
Another rule worth considering is that the use of AI must not breach any other policies, laws or regulations, including those on confidentiality and, for example, inappropriate material. The more detailed and exhaustive the rules, the lower the risk of AI misuse.
Second, risks can be further reduced by ensuring that all employees at accountancy firms are aware of the AI-use policy and receive training on it. Employees must know not only the policy itself but also the limitations of the systems they are using; they should understand the risks discussed above and share good practice for efficient querying.
A key point to emphasise in the policy and training is the need to review outputs from the AI model; output should be supervised as if reviewing the work of any other team member. Reviews are helped by documentation showing that AI was used, including the query log, so that the reviewer can confirm that appropriate scepticism was applied to the AI's response.
Revisiting engagement letter terms may also be pivotal in ensuring clarity: including specific terms on AI use will help the accountancy firm avoid unnecessary liabilities.
Finally, it is important to stay abreast of changes in generative AI systems themselves, and of any new or updated legislation concerning them, so that the accountancy firm's policies can be adjusted accordingly.
Despite the speed at which technology is developing, one thing remains clear: these tools are only as effective as the professionals wielding them.
As we push the boundaries of what is possible, the discernment and expertise of professionals will remain paramount.
Given the inherent need for human experience and expertise in our societies, professionals are, and will continue to be, the true guardians of financial integrity.