How to turn AI failure into AI success

The enterprise is rushing headfirst into AI-driven analytics and processes. However, based on the success rate so far, it appears there will be a steep learning curve before AI starts to make noticeable contributions to most data operations.

While positive stories are starting to emerge, the fact remains that most AI projects fail. The reasons vary, but in the end, it comes down to a lack of experience with the technology, which will almost certainly improve over time. In the meantime, it helps to examine some of the pain points that lead to AI failure, in the hope of flattening the learning curve and shortening its duration.

AI’s hidden functions

On a fundamental level, says researcher Dan Hendrycks of UC Berkeley, a key problem is that data scientists still lack a clear understanding of how AI works. Speaking to IEEE Spectrum, he notes that much of the decision-making process is still a mystery, so when things don’t work out, it’s difficult to ascertain what went wrong. In general, however, he and other experts note that only a handful of AI limitations are driving many failures.

One of these is brittleness — the tendency for AI to function well when a set pattern is observed, but then fail when the pattern is altered. For instance, most models can identify a school bus pretty well, but not when it is flipped on its side after an accident. At the same time, AIs can quickly “forget” older patterns once they have been trained to spot new ones. Things can also go south when AI’s use of raw logic and number-crunching leads it to conclusions that defy common sense.
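Brittleness is easy to reproduce even with toy models. The following sketch (hypothetical data, not from the article) trains a simple nearest-centroid classifier on "upright" feature patterns and then shows it misclassifying the same pattern once it is flipped, i.e. once the input drifts outside the training distribution:

```python
# Minimal sketch of brittleness: a classifier trained only on "upright"
# examples fails when the same pattern appears flipped. All data is invented
# for illustration.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Return the label of the nearest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Training data: "bus" patterns put their mass on the left, "car" on the right.
train = {
    "bus": [[9, 8, 1, 0], [8, 9, 0, 1]],
    "car": [[1, 0, 9, 8], [0, 1, 8, 9]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

upright = [9, 9, 0, 0]    # matches the training distribution
flipped = upright[::-1]   # same object, mirrored: out of distribution

print(classify(upright, centroids))  # -> "bus"
print(classify(flipped, centroids))  # -> "car": flip the pattern, flip the label
```

Real deep networks fail the same way for the same underlying reason: they key on the statistical patterns they were trained on, not on the concept a human would recognize in any orientation.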

Another contributing factor to AI failure is that it represents such a massive shift in the way data is used that most organizations have yet to adapt to it on a cultural level. Mark Montgomery, founder and CEO of AI platform developer KYield, Inc., notes that few organizations have a strong AI champion at the executive level, leaving failures to trickle up organically from below. This, in turn, leads to poor data management at the outset, as well as ill-defined projects that become difficult to operationalize, particularly at scale. Some of the projects that emerge in this fashion may prove successful, but there will be a lot of failure along the way.

Clear goals

To help minimize these issues, enterprises should avoid three key pitfalls, says Bob Friday, vice president and CTO of Juniper’s AI-Driven Enterprise Business. First, don’t go into it with vague ideas about ROI and other key metrics. At the outset of each project, leaders should clearly define both the costs and benefits. Otherwise, you are not developing AI but just playing with a shiny new toy. At the same time, there should be a concerted effort to develop the necessary AI and data management skills to produce successful outcomes. And finally, don’t try to build AI environments in-house. The faster, more reliable way to get up and running is to implement an expertly designed, integrated solution that is both flexible and scalable.

But perhaps the most important thing to keep in mind, says Emerj’s head of research, Daniel Faggella, is that AI is not IT. Instead, it represents a new way of working in the digital sphere, with all-new processes and expectations. A key difference is that while IT is deterministic, AI is probabilistic. This means actions taken in an IT environment are largely predictable, while those in AI aren’t. Consequently, AI requires a lot more care and feeding upfront in the data conditioning phase, and then serious follow-through from qualified teams and leaders to ensure that projects do not go off the rails or can be put back on track quickly if they do.

The enterprise might also benefit from a reassessment of what failure means and how it affects the overall value of its AI deployments. As Dale Carnegie once said, “Discouragement and failure are two of the surest stepping stones to success.”

In other words, the only way to truly fail with AI is to not learn from your mistakes and try, try again.
