Building trust in artificial intelligence for audit


With the advent and growth of artificial intelligence (AI) in audit, the topic of trust comes up repeatedly in discussions. Auditors have always relied on their credibility to build and maintain relationships with clients. Auditors must build their own confidence in AI technologies before convincing clients and regulators that these tools achieve the same level of assurance as traditional methods, if not more.

At MindBridge, we have been asking ourselves for some time: How can we build this confidence for our customers?

To support auditors in their assessment of AI as a viable option, we commissioned a third-party audit of the algorithms used in our risk discovery platform. This independent assessment by UCLC (University College London Consultants) is an industry first, providing a high level of transparency to any user of MindBridge technology and assurance that our AI algorithms operate as expected.

While the independent report is only available to customers, we’ll summarize the activities and results here.

Ethical AI and MindBridge

AI and machine learning (ML) are the most influential and transformative technologies of our time, leading to legitimate questions around the creation and application of these systems. Will AI-based algorithms influence potentially life-altering decisions? How are these systems secured? Are audit firms required to prove the credibility of their AI tools?

The ethics of AI sees continual press and social media coverage because the technology shapes how we interact with the world, and defines how aspects of the world interact with us.

“AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”

The biggest companies, from Amazon to Google to Microsoft, recognize the ethical issues that arise from the collection, analysis, and use of massive sets of data. The ICAEW, in its “Understanding the impact of technology in audit and finance” paper, says that it is “crucial for the regulators to develop their capabilities to be in a position to effectively regulate these sectors in the face of advances in technology.”

MindBridge has long realized that transparency and explainability are critical for the safe and effective adoption of AI, with a demonstrated commitment to the ethical development of our technology.

Why third-party validation matters

“80% of respondents say auditors should use bigger samples and more sophisticated technologies for data gathering and analysis in their day-to-day work. Nearly half say auditors should perform a deeper analysis in the areas they already cover.”

Audit 2025: The Future is Now, Forbes Insights/KPMG

The most significant difference between traditional audit approaches and an AI-based one is in the effectiveness of the auditors’ time. An AI audit analytics platform can search 100% of a client’s financial data, allowing auditors to avoid large samples in low-risk areas and focus their time instead on areas of high judgement and audit risk.

Due to the increased effectiveness of AI-based tools, regulators, audit firms, and their clients now consider data analytics an essential part of the industry’s business operations. With such a widespread impact, no one should blindly trust technology that has the potential for misinterpretation or misuse.

Auditors are known for assessing risk and gaining reasonable assurance. AI is just another example where skilled, third-party technology validation must be performed.

How the audit of MindBridge algorithms was achieved

The third-party audit of MindBridge algorithms was performed by UCLC, a leading provider of academic consultancy services supported by the prestigious UCL (University College London). UCLC’s knowledge base draws from over 6,500 academic and research staff covering a broad range of disciplines, and includes clients from international organizations, multinational enterprises, and all levels of government. UCL’s reputation as a world leader in artificial intelligence meant they were the right partner for MindBridge in completing this audit.

The goal of the audit was to verify the algorithms used by our automated risk discovery platform in three areas:

  1. That the algorithms work as designed
  2. What the algorithms do while operating
  3. That MindBridge processes are sufficient with regard to algorithm performance review, the implementation of new algorithms, and algorithm test coverage

For auditors and the accounting industry, the importance of this type of report cannot be overstated:

“Auditors have to document their approach to risk assessment in a way that meets the auditing standards. They are also required to clearly document conclusions they make over the populations of transactions considered through the course of audit. Where this analysis is being conducted through MindBridge, there is therefore a burden of proof to demonstrate that the software is acting within the parameters understood by the auditor.”

– 3rd Party Conformity Review (Algorithm Audit) for MindBridge, UCLC

The first step was for the UCLC auditors to identify potential risks in the MindBridge algorithms across four sets of criteria:

Robustness of the algorithm

Split into correctness and resilience, this set of criteria validates that the algorithms perform as expected, react well to change, and are documented well. Generally, these are rated against the algorithm’s ability to score risk correctly and to behave reliably across a range of inputs and situations.

For example, many of the techniques employed by MindBridge audit analytics are used to identify unusual financial patterns, such as those required by the ISA 240 standard. One set of these techniques falls under “outlier detection,” a form of ML that doesn’t require pre-labelled training data and as such, reduces the potential to bring bias into the analysis.

The limitation of this unsupervised ML is that it has no deep or specific knowledge of accounting practices. MindBridge adopts the concept of an ensemble, augmenting the outlier detection with complementary techniques that bring domain expertise into the analysis. Called Expert Score, this approach identifies the relative risk of unusual patterns by combining human expert understanding of business processes and monetary flows with the outlier detection.
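As a rough illustration of the ensemble idea, the sketch below combines an unsupervised outlier score with a rule-based expert weight. Every name, rule, and weight here is an illustrative assumption, not the MindBridge implementation.

```python
# Illustrative sketch: unsupervised outlier scoring combined with a
# rule-based "expert" weight (hypothetical, not the actual algorithm).
from statistics import mean, stdev

def outlier_score(amounts):
    """Score each amount by its distance from the mean in standard
    deviations, squashed into [0, 1]. Needs no labelled training data."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [min(abs(a - mu) / (3 * sigma), 1.0) for a in amounts]

def expert_score(entries, risky_accounts):
    """Encode domain knowledge as a simple rule: postings touching
    accounts an expert flagged as unusual get a higher base risk."""
    return [0.8 if e["account"] in risky_accounts else 0.1 for e in entries]

def ensemble_score(entries, risky_accounts, w_outlier=0.6, w_expert=0.4):
    """Combine both signals; the weights are illustrative assumptions."""
    amounts = [e["amount"] for e in entries]
    o, x = outlier_score(amounts), expert_score(entries, risky_accounts)
    return [w_outlier * oi + w_expert * xi for oi, xi in zip(o, x)]

entries = [
    {"account": "4000-sales", "amount": 120.0},
    {"account": "4000-sales", "amount": 95.0},
    {"account": "4000-sales", "amount": 110.0},
    {"account": "9999-suspense", "amount": 50_000.0},
]
scores = ensemble_score(entries, risky_accounts={"9999-suspense"})
```

The large posting to the expert-flagged suspense account receives the highest combined score, while the routine sales entries score low.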

AI explainability

Split into documentation and interface components, this set of criteria validates that the algorithms and their purposes are easy for users to understand. This is critical for financial auditors in setting clear and meaningful expectations, and key to building confidence with their clients.

“When making use of third-party tools, audit trails are vital, and auditors should ensure that they are able to obtain from providers clear explanations of a tool’s function, including how it manipulates input data to generate insights, so that they are able to document an audit trail as robust as one that could be created with any internally developed tool.”

Response to Technological Resources: Using technology to enhance audit quality, Financial Reporting Council

Privacy

These criteria apply to the effectiveness of controls relevant to the security, availability, and processing integrity of the system used to process client information. Privacy is closely linked to the prevention of harm, whether financial or reputational. AI systems must guarantee privacy and data protection throughout the lifecycle and be measured against the potential for malicious attacks.

Bias and discrimination

This applies to the effectiveness of controls in place to prevent unfairness in AI-based decision making. As MindBridge algorithms don’t use or impact data from identifiable individuals, this category presents limited risk.

Methodology

The assessment graded performance against the sets of criteria above on a scale from “faulty” to “working as intended or passed the tests,” based on numerous tests and research activities. This included:

  • Using different settings and data as input into the AI algorithms and recording the results
  • Comparing the results of validation code against the results of the AI platform code
  • Conducting interviews with the CTO and key data science, software development, and infrastructure personnel to determine processes and controls for systems development, operationalization, security, and testing
  • Assessing the data science and software development expertise of the MindBridge team

The UCLC auditors were granted a level 7, or “Glass-box,” access to the algorithms. This is the most transparent level available to an assessment and allowed the audit to cover all details of the algorithms.

Graph showing 7 levels for information concealed versus feedback detail trade-off curve

All MindBridge algorithms passed the assessment and the auditor’s report is available upon request to customers, regulators, and others who must rely on our algorithms.

Conclusion

With completion of the independent, third-party audit of its algorithms, MindBridge demonstrates clear evidence for AI-based tools to support the financial audit process safely and effectively. Through this assessment, MindBridge further enables the audit of the future by helping firms build confidence with their clients on the value of making AI and audit analytics an essential part of business operations.

For auditors, this announcement makes it easier to place further reliance on the results of the MindBridge artificial intelligence, allowing auditors to sample fewer items and spend more time where it matters most. It’s a key stepping stone in building credibility for AI in audit, and we hope that such third-party algorithm audits become the standard across the sector.

To learn about MindBridge’s most recent verification journey with Holistic AI, visit our blog.

For more information on how AI and automated risk discovery supports your firm, download this free eBook now:

Automated risk discovery: What is it, and how firms can achieve it

ISA 315 revised: What it means for risk assessment procedures and data analytics


ISA 315 (revised) and Data Analytics: Risk assessment procedures reimagined

The revised standard has been published as of December 2020, and you might be wondering what impact it has on your firm’s risk assessment procedures and how you can address the requirements. There are many useful sources of information on the changes, notably the IAASB’s Introduction to ISA 315. IFAC also published a helpful flowchart for ISA 315 during the work programme, which walks through the various steps required to assess risk of material misstatement.

There are a number of improvements to the standard, including an enhanced focus on controls (particularly IT controls), stronger requirements on exercising professional scepticism and documentation, and considerations around the use of data analytics for risk assessment. The new standard comes into effect from 15th December 2021, so now is the time to start planning how you will address the changes in your audit. Below we discuss some key considerations on how analytics can support a strong risk assessment.

A chart explaining risk assessment and data analytics as part of the ISA 315 revision by IFAC.

Credit: https://www.ifac.org/system/files/publications/files/IAASB-Introduction-to-ISA-315.pdf

So how can data analytics support your risk assessment according to ISA 315? The areas identified above in red show the different procedures that can be supported by the use of these techniques. A key element of the revised standard is that this should be an iterative process conducted throughout the audit. This means using data analytics tools that can be easily refreshed with the latest information will better support this requirement than more traditional approaches.

Identifying risks of material misstatement at the financial statement level

Data analytics can support the risk assessment procedures laid out in ISA 315 by analysing previous and current accounting data to the financial statement level. This allows the auditor to see the material balances in the accounts, and if machine learning is applied, where the concentration of risky transactions lies. This is where the knowledge gained in the blue boxes above can be brought to bear. Comparing understanding gained through observation to the data is a powerful way to sense check and identify areas for further investigation.

Identifying risks of material misstatement at the assertion level

Specific analyses can target assertion risks and show where there are particular problems with an assertion. To do so effectively, several different analytics tests can be applied and combined to develop a good indicator of an assertion risk, for example accuracy. These can then be applied in an automatic way to give the auditor the information needed for their risk assessment.
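The idea of combining several tests into one assertion-level indicator can be sketched as below. The individual tests, their names, and the equal weighting are hypothetical assumptions for illustration, not a prescribed methodology.

```python
# Hypothetical sketch: combine several analytics tests into a single
# assertion-level risk indicator (here, "accuracy").
def rounded_amount(txn):
    """Flag suspiciously round amounts (e.g. exactly 1,000.00)."""
    return 1.0 if txn["amount"] % 1000 == 0 else 0.0

def weekend_posting(txn):
    """Flag entries posted on a weekend (weekday: 0=Mon .. 6=Sun)."""
    return 1.0 if txn["weekday"] >= 5 else 0.0

def manual_entry(txn):
    """Flag manual journal entries rather than automated feeds."""
    return 1.0 if txn["source"] == "manual" else 0.0

ACCURACY_TESTS = [rounded_amount, weekend_posting, manual_entry]

def accuracy_risk(txn):
    """Average the individual test results into one 0..1 indicator."""
    return sum(t(txn) for t in ACCURACY_TESTS) / len(ACCURACY_TESTS)

txn = {"amount": 5000.0, "weekday": 6, "source": "manual"}
```

A transaction that trips all three tests scores 1.0; a routine automated weekday posting with a non-round amount scores 0.0, giving the auditor a graded signal rather than a single pass/fail.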

Determine significant classes of transactions, account balances or disclosures (COTABD)

Combining assertion analytics with the ability to profile similar transactions can help auditors identify significant classes of transactions or balances. Analytics can help to produce similarity scores, but also to identify sets of transactions that are unusual. This can indicate previously unknown business processes that may require a separate assessment of their control environment.
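One simple way to produce such similarity scores is to represent each transaction as a feature vector and compare it to the profile of a known business process, as in the sketch below. The feature choices and data are illustrative assumptions.

```python
# Hypothetical sketch: profile transactions by cosine similarity to a
# known process, so dissimilar entries can be surfaced as unusual.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Illustrative features: [amount in thousands, weekday, is_manual]
known_profile = [5.0, 2.0, 0.0]      # typical automated mid-week sale
candidates = {
    "txn_a": [5.2, 3.0, 0.0],        # resembles the known process
    "txn_b": [0.1, 6.0, 1.0],        # weekend manual entry, dissimilar
}
scores = {k: cosine(v, known_profile) for k, v in candidates.items()}
```

Entries with low similarity to every known profile are candidates for a previously unknown business process that may need its own control assessment.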

Assess inherent risk by assessing likelihood and magnitude

Following identification of risk, the auditor can guide their assessment by understanding the level of unusualness. Data analytics can provide finer-grained evaluations of risk, rather than a simple risky/not-risky flag. This can help support assessments aligned with the spectrum of inherent risk as defined in the standard.
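A minimal sketch of grading inherent risk from likelihood and magnitude is shown below; the thresholds and labels are illustrative assumptions, not values from the standard.

```python
# Minimal sketch: place inherent risk on a spectrum using likelihood and
# magnitude, instead of a binary risky/not-risky flag.
def inherent_risk(likelihood, magnitude):
    """Both inputs in [0, 1]; returns a position on the risk spectrum."""
    score = likelihood * magnitude
    if score >= 0.5:
        return "significant"
    if score >= 0.2:
        return "elevated"
    return "lower"

# A frequent but trivial anomaly and a fairly likely, material one land
# at different points on the spectrum:
# inherent_risk(0.9, 0.1) -> "lower"; inherent_risk(0.6, 0.9) -> "significant"
```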

Assess control risk

Data analytics such as process mining or automated testing of segregation of duties can help to inform or test control risk. These analytics can provide more comfort around the control risk assessment and help to identify deviations in the control environment that require further examination.
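An automated segregation-of-duties test can be as simple as the sketch below, which flags entries where one person both prepared and approved a posting. The field names and data shape are assumptions for illustration.

```python
# Illustrative segregation-of-duties check: flag journal entries where
# the same user both prepared and approved the posting.
entries = [
    {"id": 1, "prepared_by": "alice", "approved_by": "bob"},
    {"id": 2, "prepared_by": "carol", "approved_by": "carol"},
]

def sod_violations(entries):
    """Return ids of entries where preparer and approver coincide."""
    return [e["id"] for e in entries if e["prepared_by"] == e["approved_by"]]

violations = sod_violations(entries)   # entry 2 is a violation
```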

Material but not significant COTABD

Where COTABD has been determined as material but not significant, recurring analytics can ensure that this assessment remains valid. Anomaly detection methods can be particularly helpful here, allowing the auditor to regularly check that nothing unusual has occurred since the initial assessment was undertaken.
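One way to make such a check recurring is to summarise the population once at initial assessment and then test each new period against that baseline, as sketched below. The statistical band and data are illustrative assumptions, not a prescribed procedure.

```python
# Illustrative recurring anomaly check: confirm a "material but not
# significant" assessment still holds by testing new transactions
# against a baseline captured at the initial assessment.
from statistics import mean, stdev

def baseline(history):
    """Summarise the historical population once, at initial assessment."""
    return {"mu": mean(history), "sigma": stdev(history)}

def unusual(new_amounts, base, z=3.0):
    """Return amounts more than z standard deviations from the baseline."""
    lo = base["mu"] - z * base["sigma"]
    hi = base["mu"] + z * base["sigma"]
    return [a for a in new_amounts if a < lo or a > hi]

base = baseline([100, 105, 98, 102, 99, 101])
flags = unusual([103, 97, 250], base)   # 250 falls well outside the band
```

An empty result each period supports leaving the original assessment in place; any flagged amount prompts the auditor to revisit it.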

Next Steps: ISA 315 and Data Analytics

Audit methodologies will need to reflect the revised workflow, with particular emphasis on the iterative nature of the risk assessment and ensuring that auditors are prompted to exercise professional scepticism and document it at every stage. Data analytics can help to ensure that the information used to continuously conduct risk assessment is timely, appropriate and relevant.

These improvements to the standard will result in a stronger audit approach and advance the industry’s adoption of data and analytics technologies. With AI audit software, accountants and auditors can gain deeper insights into their clients’ financial data in less time. Overall, audit software can increase the efficiency of their processes so they can focus on delivering better results, in time for the ISA 315 (revised) December 15th, 2021 deadline.

Want to learn more about the benefits of AI auditing software? Read our article on “Assessing audit risk during engagements” to learn more. 

Want to learn more about how auditors are using AI?