Trees -- Meet Forest.
strategy digital-transformation machine-learning ai
I have met and worked with dozens of companies that hope to use machine learning in a meaningful way to improve their business operations. Almost all of them are lost in the details of the science experiment of getting the technology to work. This misses the larger issue: the need to conduct business experiments on how to deploy this technology to improve business operations.
In a recent KPMG study on business use of AI (AI And Machine Learning: Hesitation Turns To High Hopes), the authors report that over the next three years, 40 percent of executives expect to increase their AI investments by 20 percent or more, and 32 percent expect to increase robotic process automation (RPA) investments by 20 percent or more, adding up to an expected $232 billion in spending by 2025.
But how will companies spend all this money? Thomas Davenport and Randy Bean, writing for MIT Sloan Management Review (The Problem With AI Pilots), make a number of observations that resonate with my own experiences. In one, they reference a 2017 Deloitte survey and summarize: “New machine learning models may have to be written as APIs or as program code modules within existing systems. Even RPA systems, which are quite easy to implement in small volumes, can become an architectural challenge when adopted in large numbers.” They end the article with a set of practical suggestions that I think point to the same larger issue I see – that companies are spending too much of their time and money on getting the technology to work and not enough on experimenting with how the technology should be deployed to improve the business.
In our own company, we are finding dozens of places to deploy machine learning so that we can focus on the business experiments: understanding when, where, and how to deploy this technology to positively impact business outcomes. One example is an experiment in our recruiting process – every time someone applies for a job with Catalytic, we process the application automatically through a machine learning prediction algorithm and generate a score for the likelihood that the candidate will move forward to the first round of screening (a phone interview).
A company trying to conduct this experiment could get very caught up in the details of the technology questions: How do you capture the information? What information do you use in the analysis? How should the algorithm be designed? Where will this process run, and how does it integrate with existing systems and processes?
But the much more important questions we wanted to ask are business questions: Should these predictions be shown to the person screening the candidates, or will they inappropriately influence a decision? At what point, and under what conditions, is a prediction strong enough to eliminate a candidate without a human look at the application?
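To make the shape of this experiment concrete, here is a minimal sketch of the kind of score-and-route logic involved. This is not Catalytic's actual implementation: the synthetic data, features, model choice, and thresholds are all hypothetical placeholders – and the thresholds are exactly what the business experiment would vary.

```python
# A minimal sketch, not Catalytic's actual system. Features, synthetic
# training data, and thresholds are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for historical applications: numeric features extracted from past
# applications, labeled 1 if the candidate advanced to a phone screen.
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 0] + X_train[:, 1] + rng.normal(size=500) > 0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

def route_application(features, show_score_to_screener=False):
    """Score one new application and decide how it enters screening."""
    score = model.predict_proba([features])[0, 1]  # P(advance to phone screen)
    # The business experiment lives in these choices, not in the model:
    # where the thresholds sit, and whether the screener sees the raw score.
    if score >= 0.90:
        return "fast-track to phone screen", score
    if score <= 0.05:
        return "queue for human confirmation before rejecting", score
    return "standard human review", (score if show_score_to_screener else None)

print(route_application(rng.normal(size=4)))
```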
So how do we avoid getting lost in the technology? To answer this, let me return to the three areas of improvement commonly cited as accelerating the development of machine learning: algorithms, data availability, and processing power.
- Algorithm Quality and Applicability – applying the right machine learning strategies and tuning parameters to get the best performance can consume an enormous part of an investment budget. But how much does it matter? At Catalytic we have postulated that automated tuning can provide a “good enough” approach that eliminates this as a blocking issue for pragmatic business experiments. The initial prediction model is built automatically – a four-step configuration “wizard” lets a business user create a model in five minutes, ready to drop into a business process. Once the algorithm has been running for some time, parameter tuning and algorithm selection can be analyzed further by evaluating the real results of different system configurations. We make this part of a business experiment that the business users can perform (the first sketch after this list shows the idea).
- Data Quantity and Consistency – most businesses, even for processes they run frequently, have not collected and stored enough information about those processes (and their outcomes) to form a useful training data set for machine learning. This can become a second blocking issue, requiring a huge data collection, cleaning, and extraction/selection effort before any useful machine learning can be done. By using a platform like Catalytic to automate the process, clean data and outcomes begin to be collected with every new run. While initial predictions are very poor with limited data, each process run generates a new data point that can be added, the model recalculated, and prediction “confidence” improved over time. Rather than letting this become a blocker, we simply report the prediction and its confidence, allowing the business to decide how to use the algorithm's output as it improves (the second sketch after this list illustrates this).
- Scalable Processing Power – setting up the computation environment itself can be a daunting technical task for a corporation with little existing expertise in the processing requirements of machine learning. By using a scalable cloud infrastructure (Catalytic uses AWS), we have eliminated this as a challenge in getting started. Compute resources are configured by default to support all of our customers and scale with increasing demand. By aggregating usage across many different companies (and now globally, in every time zone), we deliver an economy of scale and an on-demand infrastructure that eliminates this third potential technology blocker.
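As a rough illustration of the first point, here is what “good enough” automated tuning can look like: a small fixed menu of algorithms and parameters, with cross-validation picking the winner. This is a generic scikit-learn example, not Catalytic's wizard; the candidate models and grids are arbitrary assumptions.

```python
# A generic sketch of automated "good enough" tuning, not Catalytic's wizard.
# The candidate models and parameter grids are arbitrary examples.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A small, fixed menu of algorithms and settings; cross-validation picks the
# winner automatically, so no data scientist is needed to get started.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)
    search.fit(X, y)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected {type(best_model).__name__}, CV accuracy {best_score:.2f}")
```

Once a model is live, the same loop can be re-run against real outcome data, turning algorithm selection into a business experiment rather than an up-front research project.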
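The second point – collecting clean data with every run and reporting confidence as it improves – can be sketched just as simply. Again, this is an illustrative toy, not the platform's internals: the features, outcomes, and the use of cross-validated accuracy as a confidence proxy are all assumptions.

```python
# A toy sketch of "collect as you automate": each process run adds one labeled
# example, the model is refit, and a rough confidence accompanies every
# prediction. Features, outcomes, and the confidence proxy are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X_seen, y_seen = [], []

def process_run(features, outcome):
    """One run of the automated process: store clean inputs plus the outcome."""
    X_seen.append(features)
    y_seen.append(outcome)

def predict_with_confidence(features):
    """Return a prediction plus cross-validated accuracy as a confidence proxy."""
    X, y = np.array(X_seen), np.array(y_seen)
    counts = np.bincount(y)
    if len(y) < 10 or len(counts) < 2 or counts.min() < 5:
        return None, 0.0  # too little data: report no confidence, don't block
    model = LogisticRegression(max_iter=1000).fit(X, y)
    confidence = cross_val_score(model, X, y, cv=5).mean()
    return model.predict([features])[0], round(confidence, 2)

# Early predictions are weak; confidence rises as runs accumulate.
for i in range(1, 201):
    f = rng.normal(size=3)
    process_run(f, int(f[0] + f[1] > 0))
    if i in (10, 50, 200):
        print(i, predict_with_confidence(rng.normal(size=3)))
```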
The “forest” here is the pragmatic application of AI in the enterprise to improve business operations. At Catalytic we are helping companies avoid getting lost in the “trees” of the technical barriers to adoption of machine learning and helping them move more quickly into a future where people, bots, and AI all work together.