You hear a lot of business people talk about their corporate AI strategy. But not many people stop to consider that any AI they have in place already has a strategy of its own.
The fact is, every AI has a strategy. By deploying any type of predictive model, you have set a strategic direction, whether you realize it or not.
The way an AI learns is a lot like the way a child learns. The AI is set up with a reward mechanism that shapes its development. If you don’t set up that reward structure appropriately, the AI will learn to maximize its reward at the expense of everything else. Hollywood loves cranking out movies based on this premise. While we aren’t talking killer robots yet, it is important to understand that AI does develop a mind of its own.
Much like individuals, every model is a product of heredity, environment, and education. AI develops basic intent and a unique understanding of how to avoid pain and maximize reward. Trying to develop the “best model” without strategic context is much like trying to hire the “best employee” without knowing the role you are hiring for. It is as absurd to expect one employee to be the best at all functions as it is to expect one model to be the best in all business contexts.
The reason so many AI projects fail is that businesses fail to set a strategic direction for their AI. This often leads to actions that actually destroy business value. At a conference, we ran across a mortgage underwriting company whose executives were very proud that they had set up their AI such that they would never underwrite mortgages that would potentially go bad. The company reasoned that since mortgages that went bad cost them a lot of money, an AI trained to prevent that from happening must be good, right?
Well, no. The problem is, that model had an implied strategy built in: “Never underwrite a bad mortgage.” For a mortgage underwriter, following that strategy will destroy revenue. If you’re going to be so conservative that you avoid failure at all costs, you’ve turned yourself into a risk underwriter that takes no risk. And that will put you out of business.
A better approach is an AI that balances the cost of a mortgage that goes bad versus the benefit from having additional volume and more transactions. Then come up with a model that has the company take on the optimal level of risk. Instead, the strategy the mortgage company unwittingly deployed with their AI was in direct opposition to their overall objectives.
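The balancing act described above is, at its core, an expected-value calculation. Here is a minimal sketch of that decision rule, with entirely hypothetical loss and profit figures (not the company's actual economics):

```python
def expected_profit(p_default, loss_if_default, profit_if_good):
    """Expected value of underwriting one mortgage."""
    return (1 - p_default) * profit_if_good - p_default * loss_if_default

def should_underwrite(p_default, loss_if_default=50_000, profit_if_good=8_000):
    """Approve whenever the expected value is positive."""
    return expected_profit(p_default, loss_if_default, profit_if_good) > 0

# A "never underwrite a bad mortgage" rule rejects any loan with a
# nonzero default probability, forgoing profitable volume. The
# expected-value rule instead tolerates defaults up to break-even:
break_even = 8_000 / (8_000 + 50_000)  # roughly 0.14 with these numbers
```

With this framing, a loan with a 5% default probability is clearly worth writing, while one at 50% is not; the optimal risk level falls out of the cost-benefit tradeoff rather than being hard-coded as zero.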
Another way that the strategy deployed with the AI gets disconnected from the overall business strategy is when the AI is trained to focus on the wrong thing. It’s essentially a training error, but it is caused by the AI not having context. We see this very often when AI models are trained based on a sales pipeline. In most sales pipelines, companies have a lot of small deals and a few big deals. Some organizations thrive on small run rate business, while others live and die by their middle and large size opportunities.
Because small deals tend to be the most frequent in any organization, the AI fixates on them: it gets penalized more often for getting small deals wrong. An AI that doesn’t take into consideration the different impact those deals would have will spend a lot of energy trying to figure out the outcome of the small deals and pay less attention to large and medium-sized opportunities.
If your business has many uniformly sized deals, the AI might do well. However, if your business follows an 80-20 rule, where the top 20% of your opportunities deliver 80% of the funnel that you close, then the AI will impose a business strategy that’s completely disconnected from what you’re trying to achieve. Unless the AI understands what’s important to you, it will not understand your sales strategy, and it will pay the least attention to the things that matter most.
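One common way to give the model that context is to weight each training example by its dollar value, so that a misprediction on a large deal costs the model proportionally more. The sketch below uses a synthetic pipeline and scikit-learn's `sample_weight` parameter; the deal sizes and features are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pipeline: many small deals, a handful of large ones
# (illustrative numbers only).
deal_value = np.concatenate([rng.uniform(1e3, 1e4, 400),
                             rng.uniform(1e5, 1e6, 20)])
X = rng.normal(size=(420, 3))          # stand-in deal features
y = rng.integers(0, 2, 420)            # won (1) / lost (0)

# Unweighted fit: every deal counts equally, so the frequent small
# deals dominate the loss.
unweighted = LogisticRegression().fit(X, y)

# Value-weighted fit: each deal's contribution to the loss is scaled
# by its revenue, so the few large deals carry real weight.
weighted = LogisticRegression().fit(X, y, sample_weight=deal_value)
```

The two fits generally land on different coefficients, which is exactly the point: the weighting encodes which errors your business can afford.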
A better approach is an AI that takes your strategy upfront and determines which strategy is likely to be the most profitable. Deploy a portfolio of models that cover all possible strategies you could pursue, and based on your cost-benefit tradeoffs, pick the strategy that’s most impactful. That way, the AI is trained on your strategic intent.
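The portfolio idea can be sketched as a simple selection over candidate strategies, each scored by its expected payoff under your cost-benefit tradeoffs. The strategy names and figures below are hypothetical placeholders:

```python
# Each candidate strategy maps to (win_probability, revenue_if_won,
# cost_to_pursue). All numbers are illustrative assumptions.
strategies = {
    "chase_small_deals":  (0.60,  10_000,  4_000),
    "chase_large_deals":  (0.15, 500_000, 40_000),
    "balanced_portfolio": (0.35, 120_000, 20_000),
}

def expected_value(win_prob, revenue, cost):
    """Expected payoff of pursuing a strategy."""
    return win_prob * revenue - cost

# Pick the strategy with the highest expected payoff; in practice each
# entry would be a trained model evaluated against your economics.
best = max(strategies, key=lambda s: expected_value(*strategies[s]))
```

With these made-up numbers, chasing large deals wins on expected value; the point is that the choice follows from your tradeoffs rather than from whatever objective the model happened to absorb.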
AI must always start with your intent, the business goals you’re trying to achieve. Because without that, you’re lost. Traditional data science is always “model first,” with business impact a distant second, if it’s considered at all. This worked well for modeling the physical world and phenomena where there is only one objective truth. Business cannot be reduced to a simple universal equation, which is why applying traditional data science methods to business problems is a fundamentally flawed approach. You’re not identifying the strategic outcome you want to achieve and working back from that. You’re saying, “Give me some AI.” That rarely ends well.