Interview with Ryan Teeter, University of Pittsburgh


What is your position and what do you teach at your University?

I am Ryan Teeter, and I teach Accounting Information Systems in particular, as well as Auditing and Data Analytics, at the University of Pittsburgh. I am a Clinical Assistant Professor, which just means I teach a lot of courses, and I am always looking for ways to incorporate technology into my classroom and into the projects I have my students do. A lot of the work we do is very hands-on; most of it is what we call experience-based learning. It is focused on getting students hands-on with various accounting and auditing tools, overcoming any challenges with learning a particular tool, and gaining from that experience to improve their understanding of accounting and auditing.

Which course did you pilot MindBridge Ai Auditor in, and how many students did you have in the class?

I piloted MindBridge in a graduate course on data analytics for accounting. The course is titled Accounting Data Analytics and is part of our Master of Accountancy program at the University of Pittsburgh. We had 28 students this semester and next semester we will be doubling the capacity, so we will have about 50 students participating next time.

It sounds like there is a lot of demand for this course, is it a competitive process to be accepted?

There is a cut-off for this program. It is an elective course and there is a lot of demand for it, so that’s why we’ll be increasing capacity going forward.

What was your motivation to pilot MindBridge Ai Auditor?

In the Data Analytics course we spend about half of the course teaching fundamental data analytics topics, terminology, and foundations. We talk about asking the right questions, going through and cleaning up data, and data quality issues, particularly as they relate to an audit. We spend a few weeks on different types of models, from classification to regression to clustering and profiling data, and so forth. Next, we move to interpreting the results and generating visualizations for communicating the results of the data analysis to decision makers, management, and leadership within organizations.

By the time we’ve moved through those fundamentals we have talked about topics like machine learning, different types of risk scores, expert models, and artificial intelligence. Then, in the second half of the course, we move into more domain-specific topics. We spend a couple of weeks on audit analytics, management accounting analytics, and financial statement analysis, and in the auditing section we’re looking for something more than just the traditional CAATs, computer-assisted audit techniques. So, we introduce students to things like double-payment checking, fuzzy matching, and some of the probabilistic models for outlier detection. By this point, however, I am really looking for ways to take that to the next level and find a convergence of those different technologies in one place.

I thought that MindBridge was particularly useful for illustrating the different topics we were talking about, like Benford’s testing and outlier detection, but also for the concept of discovering the really risky items. Having the platform set those risk scores, and make them apparent to the auditor as they go through and evaluate ledgers and journals, was an important discovery concept.

After having used MindBridge Ai Auditor in your curriculum, how was your overall experience?

The experience was really good. The software is pretty straightforward aside from some minor issues with importing and running the analytics, meaning just the time that it took to re-evaluate the ledger once we changed some of the risk score items. The students were very satisfied with the program, they liked that they were able to drill into the risky transactions and see exactly what caused some items to be flagged as a high or medium risk. The interface was fairly intuitive.

I would say the only negative is that it’s almost too simple in a sense, because it is so user friendly. You can see the risk scores and see what triggered the scores, and then you’re kind of done. I would like, from an illustrative perspective, to be able to go into a little more depth on the different analyses that are being performed, popping open the hood a little bit to see how this is all working. But otherwise, the students were very satisfied with it and they could see the applicable use of data analytics for the ledger in that particular case.

How was the feedback from your students?

Overwhelmingly, the students found it eye-opening that they could examine what went into the risk scoring. They liked that they had the control to explore different aspects of the data if they wanted to, so if they wanted to focus more on outlier detection or zero in on individuals or keywords, the platform enabled them to do so. They liked the flexibility that the platform offered. I think with the cases that were provided they had some clear-cut examples to examine; it would be really interesting to see what they could do exploring data that was a little more ambiguous.

What’s next for you and MindBridge Ai? Will you use Ai Auditor as part of your curriculum again?

I was very pleased with the MindBridge Ai presentations and the illustrative applications of the platform in my Data Analytics course. I really would like to extend it into my undergraduate Accounting Information Systems course as well. We talk about auditing, audit analytics, and risk a bit more in that course, at a basic level. Having something that is straightforward and shows the different techniques, while also piquing the undergraduates’ interest in data analysis, risk scoring, and applied statistics, would be very useful.

I have a textbook with McGraw-Hill on Data Analytics for Accounting, which comes out in May. My current expectation is to develop supplemental material for future editions of the textbook that may incorporate MindBridge Ai Auditor. It’s all still very preliminary, but for illustrative purposes it’s an intuitive and wonderful example of applying data analytics in accounting.

Statistical sampling – the intelligent way


Auditors love statistical sampling and so does the MindBridge Ai team. Why wouldn’t we? Statistical sampling uses the laws of probability to measure sampling risk. We truly believe that statistical sampling largely outperforms judgment sampling.

Don’t get us wrong—the experience of a Partner is undisputed and their ability to focus the audit and identify areas of risk in financial statements is the key to a successful audit and, of course, the peace of mind once an audit opinion is issued. However, once the risk areas are identified and a population of thousands (if not millions) of transactions remains to be reviewed, experience alone might not be sufficient to select a test set.

Statistical sampling and the laws of probability make it feasible to pull a test set from a huge population so an audit team can use it to give reasonable assurance that the overall population is free from material misstatements.

Can audit teams give the same reasonable assurance and test fewer transactions?

There are many commonly used methods to enhance statistical sampling in the audit profession. For example, Monetary Unit Sampling is one (very, very popular) way that a test set can be enriched with the items most likely to be of interest to auditors. In Monetary Unit Sampling, every dollar is regarded as a distinct sampling unit and given an equal chance of selection, and the transaction containing a selected dollar is picked for testing. Therefore, the higher the amount of a transaction, the more likely it is to be part of the test set. In an audit context this is great, as larger transactions, the ones that are more likely to cause a material misstatement, are more likely to be picked for testing.
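To make that mechanic concrete, here is a minimal Python sketch of dollar-weighted, fixed-interval selection in the spirit of Monetary Unit Sampling. The data layout and interval details are simplified assumptions for illustration, not any particular vendor’s implementation.

```python
import random

def mus_select(transactions, sample_size):
    """Fixed-interval Monetary Unit Sampling sketch: transactions are selected
    with probability proportional to their dollar amount. `transactions` is a
    list of (transaction_id, amount) pairs with positive amounts."""
    total = sum(amount for _, amount in transactions)
    interval = total / sample_size               # one sampling hit per `interval` dollars
    start = random.uniform(0, interval)          # random starting dollar
    hits = [start + i * interval for i in range(sample_size)]

    selected, cumulative, hit_idx = [], 0.0, 0
    for txn_id, amount in transactions:
        cumulative += amount
        # A transaction is picked whenever a hit falls inside the dollar range
        # it covers; larger amounts cover more dollars and thus more hits.
        while hit_idx < len(hits) and hits[hit_idx] <= cumulative:
            if txn_id not in selected:
                selected.append(txn_id)
            hit_idx += 1
    return selected

# Example: the $9,000 entry is nine times more likely to be picked than the $1,000 one.
ledger = [("T1", 1_000), ("T2", 9_000), ("T3", 500), ("T4", 4_500)]
print(mus_select(ledger, sample_size=2))
```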

How did MindBridge come up with an improved sampling methodology?

First, let’s clarify what improved means. For us, improved means enriching the data in a meaningful way so that the whole testing process becomes more efficient. Similar to Monetary Unit Sampling, we enrich a data set with additional information to make it more likely that transactions of audit interest get sampled.

The intelligent sampler

We run a number of tests to calculate a risk score for every single transaction in a data set. This risk score is the basis of our intelligent sampler, which pulls a stratified sample set across the whole population. As with Monetary Unit Sampling, every transaction has a chance of being selected for field testing; however, the ones more likely to be of audit interest have a higher chance.
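The sampler’s exact internals aren’t spelled out here, so the following is only a minimal sketch of risk-stratified selection, assuming each transaction already carries a Low/Medium/High risk label and that an equal number of items is drawn from the Low and Medium/High strata (as described below):

```python
import random

def intelligent_sample(transactions, sample_size):
    """Illustrative risk-stratified sampler (a sketch, not MindBridge's code).
    `transactions` is a list of dicts with an 'id' and a 'risk' label in
    {'Low', 'Medium', 'High'}. Half of the sample is drawn from the Low stratum
    and half from Medium/High, so every transaction can be selected, but the
    (typically much smaller) higher-risk stratum is sampled far more densely."""
    low = [t for t in transactions if t["risk"] == "Low"]
    med_high = [t for t in transactions if t["risk"] in ("Medium", "High")]
    per_stratum = sample_size // 2
    picks = random.sample(low, min(per_stratum, len(low)))
    picks += random.sample(med_high, min(sample_size - len(picks), len(med_high)))
    return [t["id"] for t in picks]

# Tiny example: two picks come from each stratum.
ledger = [{"id": f"L{i}", "risk": "Low"} for i in range(20)] + \
         [{"id": f"H{i}", "risk": "High"} for i in range(4)]
print(intelligent_sample(ledger, sample_size=4))
```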

What does “of audit interest” mean?

“Of audit interest” means the likelihood that a transaction is misstated due to fraud or error. In a number of field experiments, the Ai Auditor™ has proven its ability to identify and push fraud- and error-prone transactions to the top of our risk ranking.

This ability of our Ai Auditor™ to identify transactions which are more likely to be fraudulent gives audit teams the ability to reach the desired confidence level with a smaller test set of transactions. MindBridge’s intelligent sampler is a focusing tool that helps the auditor identify audit areas that they should spend the most time on, but at the same time makes sure they also spend enough time in other areas beyond the high-risk transactions.

Comparing statistical random sampling with intelligent sampling

Our data science team created a general journal for educational purposes. This journal consists of 2,966 transactions, of which 24 are non-compliant. The non-compliance rate in this file is 0.8%.

Assuming an acceptable error rate of 1%, a sample set of 230 transactions would need to be tested to achieve 90% confidence using a statistical random sampling methodology.
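The 230 figure is consistent with the standard zero-expected-error (discovery) sampling formula, which solves 1 - (1 - error rate)^n >= confidence for n; a quick check:

```python
import math

# Smallest sample size n such that, if at least 1% of the population is
# non-compliant, a purely random sample contains at least one non-compliant
# item with 90% probability: n >= ln(1 - 0.90) / ln(1 - 0.01).
confidence, error_rate = 0.90, 0.01
n = math.ceil(math.log(1 - confidence) / math.log(1 - error_rate))
print(n)  # 230
```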

Our intelligent sampler takes an equal number of samples from the Low and Medium/High-risk categories. This ensures that every transaction has a chance of being selected in the sample, but those with higher risk have a higher chance. Using this method, the auditor has a 90% chance of discovering non-compliance in the file by testing a sample set of just 14 transactions.

Approach                       Confidence level / error rate   Required sample size
Statistical Random Sampling    90% / 1%                        230
Intelligent Sampling           90% / 1%                        14
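Why 14 items can be enough depends on how strongly the risk scoring concentrates the non-compliant entries into the Medium/High-risk stratum. The stratum composition below is purely hypothetical (the actual breakdown of the educational journal isn’t given above), but it illustrates the mechanism:

```python
from math import comb

def detection_probability(stratum_size, bad_in_stratum, draws):
    """Probability of seeing at least one non-compliant transaction when `draws`
    items are sampled without replacement from a stratum of `stratum_size`
    items containing `bad_in_stratum` non-compliant ones."""
    miss = comb(stratum_size - bad_in_stratum, draws) / comb(stratum_size, draws)
    return 1 - miss

# Hypothetical illustration: if risk scoring pushed 22 of the 24 non-compliant
# transactions into a Medium/High-risk stratum of only 80 transactions, then the
# 7 draws that a 14-item intelligent sample takes from that stratum already
# detect non-compliance with roughly 90% probability.
print(round(detection_probability(stratum_size=80, bad_in_stratum=22, draws=7), 3))
```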

 

If you’re interested in learning more about MindBridge™ and our intelligent sampler, or would like to run your own A/B test, please get in touch.