Changing the World with Small Teams


For many years my email signature has ended with a cheesy quote: “never doubt that a small group of thoughtful committed people can change the world.” The actual quote is longer than this and is attributed to the anthropologist Margaret Mead. The full version is “Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has.”

A colleague of mine recently asked me if larger teams were the key to success in a large company. I wondered if this colleague had ever read to the end of one of my emails. Were they trolling me?

The core sentiment of the quote is that only small, thoughtful and committed groups of people succeed in making significant change. If you work in a tech company this is important because it applies most of all to the technology disruption around us today. Cloud computing and Artificial Intelligence are changing the face of many industries. It’s not the older, larger and established companies who are necessarily leading this change; it’s often the smaller, nimbler organizations who have the focus to figure out and lead this disruption.

Quite a few years ago now I founded a small high-tech startup that was fairly quickly acquired by Cognos, who were themselves acquired a year or so later by IBM. Code I wrote in my basement in West London ended up, 10 years later, being a core piece of technology in tens of thousands of installations. Large-scale tech companies are great for scaling ideas, but my most important lesson from working in small startups and big corporations was that generating ideas and solving hard problems is not necessarily about big teams. In fact, it’s almost never about big teams.

Why is this so?

The first reason is quality over quantity. The adage in the industry is that a great developer is three times faster at delivering software than an average developer. While this is true in my experience, there is a little more to it. In small teams it is possible to handpick team members with the right mix of talents. With the right people, with complementary skill sets and respect for each other’s expertise, you can create collaborative teams that easily outpace much larger groups.

Small teams with diverse and complementary skill sets also foster something called the Medici effect. It relates back to team collaboration: diversity in thinking and the connection of ideas through close-knit, face-to-face communication is often what leads to new innovation.

As teams grow they can impede themselves through too much communication overhead. It’s very hard to have an effective discussion with 25 people, let alone 100. This is why effective software teams are rarely this big, and are instead divided into smaller, mission-focused groups.

The core point is this: if you think you need a bigger team to solve a difficult problem, you are most likely wrong. Think again. That line of thinking leads to inaction, and if you are in a startup it may result in failure. Sometimes constraints create the best solutions, so keep working at it. Time and again I have seen hard problems solved by small groups, often with simple approaches. My hopeful message to entrepreneurs and startups is that not only can you solve hard problems that big companies may not be able to solve, but you also have the capacity and ability to disrupt entire industries.

Keep thinking you can change the world. Remember *only* small teams can do this.

Interview with Ryan Teeter, University of Pittsburgh


What is your position and what do you teach at your University?

I am Ryan Teeter. I teach Accounting Information Systems in particular, as well as Auditing and Data Analytics, at the University of Pittsburgh. I am a Clinical Assistant Professor, which just means I teach a lot of courses, and I am always looking for ways to incorporate technology into my classroom and into the projects I have my students do. A lot of the work we do is very hands-on; most of it is what we call experience-based learning. It is focused on getting the students hands-on with various accounting and auditing tools, overcoming any challenges with learning a particular tool, and gaining from that experience to improve their understanding of accounting and auditing.

Which course did you pilot MindBridge Ai Auditor in, and how many students did you have in the class?

I piloted MindBridge in a graduate course on data analytics for accounting. The course is titled Accounting Data Analytics and is part of our Master of Accountancy program at the University of Pittsburgh. We had 28 students this semester and next semester we will be doubling the capacity, so we will have about 50 students participating next time.

It sounds like there is a lot of demand for this course, is it a competitive process to be accepted?

There is a cut-off for this program. It is an elective course and there is a lot of demand for it, which is why we’ll be increasing capacity going forward.

What was your motivation to pilot MindBridge Ai Auditor?

In the Data Analytics course we spend about half of the course teaching fundamental data analytics topics, terminology and foundations. We’re talking about asking the right questions, going through and cleaning up data, and data quality issues, particularly how they relate to an audit. We spend a few weeks on different types of models, from classification to regression to clustering and profiling data, and so forth. Next, we move to interpreting the results and generating visualizations for communicating the results of the data analysis to decision makers, management and leadership positions within organizations.

By the time we’ve moved through those fundamentals we have talked about topics like machine learning and different types of risk scores, and we have talked about expert models and artificial intelligence. Then, in the second half of the course, we move into more domain-specific topics. We spend a couple of weeks on audit analytics, management accounting analytics, and financial statement analysis, and then in the auditing section we’re looking for something more than just the traditional CAATs, computer-assisted audit techniques. So, we introduce students to things like double-payment checking and fuzzy matching, and some of the probabilistic models for outlier detection. By this point, however, I am really looking for ways to take that to the next level and find a convergence of those different technologies in one place.

I thought that MindBridge was particularly useful for illustrating the different topics we were talking about, like Benford’s testing and outlier detection, but also for the concept of discovering the really risky items. So the platform’s ability to set those risk scores, and make them apparent to the auditor as they go through and evaluate ledgers and journals, was an important discovery concept.
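For readers unfamiliar with the technique mentioned above: Benford’s testing compares the leading-digit frequencies observed in a set of amounts against the logarithmic distribution Benford’s law predicts, flagging populations that deviate from it. Here is a minimal illustrative sketch in Python; this is my own example, not MindBridge’s implementation, and the function names are invented for illustration:

```python
import math
from collections import Counter

def benford_expected():
    # Benford's law: P(leading digit = d) = log10(1 + 1/d)
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit_freq(amounts):
    # Observed leading-digit frequencies for a list of nonzero amounts
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}
```

An auditor-style check would then compare the two dictionaries digit by digit (for example with a chi-squared statistic) and flag the ledger when observed frequencies stray too far from the expected ones.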

After having used MindBridge Ai Auditor in your curriculum, how was your overall experience?

The experience was really good. The software is pretty straightforward, aside from some minor issues with importing and running the analytics, namely the time it took to re-evaluate the ledger once we changed some of the risk score items. The students were very satisfied with the program; they liked that they were able to drill into the risky transactions and see exactly what caused some items to be flagged as high or medium risk. The interface was fairly intuitive.

I would say the only negative is that it’s almost too simple in a sense, because it is so user-friendly. You can see the risk scores and see what triggered them, and then you’re kind of done. I would like, from an illustrative perspective, to be able to go into a little more depth on the different analyses being performed, popping open the hood a little bit to see how it all works. But otherwise, the students were very satisfied with it and they could see the applicable use of data analytics for the ledger in that particular case.

How was the feedback from your students?

Overwhelmingly, the students found it eye-opening that they could examine what went into the risk scoring. They liked that they had the control to explore different aspects of the data if they wanted to, so if they wanted to focus more on outlier detection or zero in on individuals or keywords, the platform enabled them to do so. They liked the flexibility that the platform offered. I think with the cases that were provided they had some clear-cut examples to examine; it would be really interesting to see what they could do when exploring data that was a little more ambiguous.

What’s next for you and MindBridge Ai? Will you use Ai Auditor as part of your curriculum again?

I was very pleased with the MindBridge Ai presentations and the illustrative applications of the platform in my Data Analytics course. I really would like to extend it into my undergraduate Accounting Information Systems course as well. We talk about auditing, audit analytics and risk a bit more in that course, at a basic level. Having something that is straightforward and shows the different techniques, while also piquing the undergraduates’ interest in data analysis, risk scoring and applied statistics, would be very useful.

I have a textbook on Data Analytics for Accounting, written with McGraw Hill, which comes out in May. My current expectation is to develop supplemental material for future editions of the textbook that may incorporate MindBridge Ai Auditor. It’s all still very preliminary, but for illustrative purposes it’s an intuitive and wonderful example of applying data analytics in accounting.