AI Pilot Purgatory and How to Avoid It

By Joe Cha, January 8, 2020

## TL;DR

**“How to Avoid AI Pilot Purgatory” Cheat Sheet.** Following best practices increases the likelihood of producing useful tools in production:

- PoC and MVP approaches prevent use-case disconnect.
- **Solid productionisation** (setting access rights, logging user activity, data governance, maintaining the code base, scalability, releasing updates, setting a common environment for developers, etc.) is a **key differentiator**.
- Maintaining the model is just as challenging as researching it.
- Prepare responses for common problems such as concept drift, model bias, and resistance to adoption by teams.
- Adding computational thinking to the business organization improves use-case discovery and increases adoption of the solution.
- AI/ML projects are risky and some will fail, BUT the upside is real.

## On AI Pilot Purgatory (and How to Avoid It)

> *Merriam-Webster defines “purgatory” as “a place or state of temporary suffering or misery.”*

In the McKinsey [study](https://www.mckinsey.com/featured-insights/artificial-intelligence/ai-adoption-advances-but-foundational-barriers-remain) entitled “AI adoption advances, but foundational barriers remain” by Michael Chui and Sankalp Malhotra, one paragraph stood out for me:

> *While most companies have already deployed AI to some extent, few have embedded it into standard operating processes in multiple business units or functions, and about one-third are **only piloting** the use of AI. While AI is still in its early days, getting stuck in “pilot purgatory” is a real risk.*

I imagine it must be extremely frustrating for a company or other organization to invest in building and deploying an AI model only to watch it languish in “AI pilot purgatory,” where at best a handful of people use it and the dreams of scale are never realized.

There is a great deal of buzzword-filled hype about artificial intelligence in the media. The fact of the matter is that if it’s properly built, deployed, and maintained, AI does offer considerable value and competitive advantage. But if an AI model is not properly built, deployed, or maintained, there is a real risk that an organization will end up with a lonely AI sitting on its servers or in the cloud, the result of big dreams and good intentions, helping no one, or even worse, hurting people.

Is the adoption of AI solutions an issue? How can one prevent “AI Pilot Purgatory”? Actually, introducing **any** type of major initiative to a company can be challenging, not just AI projects. What are the similarities between introducing a more common type of major initiative and introducing an AI model to an organization? I consulted some local experts about this issue: [Filip Stachura](https://www.linkedin.com/in/filipstachura/) and [Marcin Dubel](https://www.linkedin.com/in/marcin-dubel-b5a3b5a9/) (in a future post). Here are Filip’s observations:

1. You need different skills in the team to create the model (research) and to take care of it (data engineers / IT). Maintenance is as hard as research, especially if you have events like data drift.
2. Even if you get good business results, **productionisation/operationalisation of models is hard!** You need to plug models into your data environment, make sure your model won’t do anything unexpected in production, **and** prepare for the moment it does. Here is an [example](https://www.forbes.com/sites/mzhang/2015/07/01/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software/#3d35d997713d).
3. Data going into the model needs to be clean. This is a big problem. Data scientists and researchers clean their data, but very often this is a one-time event. You need proper **data governance** to keep the data clean for an AI model in production.
4. Current employees can be afraid of losing their jobs and hamper adoption.
5. Current employees can genuinely become less relevant. How are you going to help them adapt?
6. The world can change, and one day the model can start to behave badly. This is called [**concept drift**](https://en.wikipedia.org/wiki/Concept_drift). Such a situation is common but easily mitigated with planning (see the sketch after this list).
7. Who is going to validate the results of the model during initial tests and later, on a daily basis?
8. How is the model going to be validated? Validate the model after you train it and see how it performs in the business environment. You must plan to measure the impact of the model.
9. **AI/ML projects are risky.** To some extent it is normal that some hypotheses will fail (e.g., the technology is too expensive, or the results are not satisfying for the business).
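To make point #6 concrete, here is a minimal sketch of drift monitoring in R. It assumes you log a numeric model input (or the model’s prediction scores) over time and compare a recent window against the training-time distribution with a two-sample Kolmogorov-Smirnov test. The names and the alert threshold are illustrative; in practice you would choose tests and thresholds per feature.

```r
# Minimal drift check: compare a recent window of a numeric feature
# against its reference (training-time) distribution.
# `reference` and `recent` are numeric vectors; the 0.01 threshold
# is illustrative, not a universal recommendation.
check_drift <- function(reference, recent, alpha = 0.01) {
  test <- stats::ks.test(reference, recent)
  if (test$p.value < alpha) {
    # In production this would alert the team responsible for the model.
    warning(sprintf(
      "Possible drift (KS p-value = %.4f): consider investigating and retraining.",
      test$p.value
    ))
  }
  invisible(test)
}

# Hypothetical usage: training-time values vs. last week's production values
# check_drift(training_data$claim_amount, last_week$claim_amount)
```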
## Discussion

It is interesting to note that many of the challenges listed above are more human than technological in nature. Professor Mihnea Moldoveanu from the University of Toronto puts it well in “[Why AI Underperforms and What Companies Can Do About It](https://hbr.org/2019/03/why-ai-underperforms-and-what-companies-can-do-about-it)”:

> “Why is the gap between companies’ AI ambition and their actual adoption so large? **The answer is not primarily technical.** It is organizational and cultural. A massive skills and language gap has emerged between key organizational decision makers and their AI teams… And it is growing, not shrinking.”

Filip mentions in point #1 above that you most probably need different people to create the AI model and to maintain it. It’s a matter of putting the right people in the right place to keep your AI up and running (again, an organizational rather than technological scenario).

Point #3 refers to the practice of strategizing and executing on the collection and validation of data in your organization. For examples of data governance in action, you can read [Paweł Przytuła](https://appsilon.com/author/pawel/)’s article “[Data Quality Case Studies: How We Saved Clients Real Money Thanks to Data Validation](https://appsilon.com/data-quality/).”

Filip mentions “productionisation” in point #2 above. We talk about productionisation a lot in our blog posts, such as [here](https://appsilon.com/ai-transformation-of-insurance/) and [here](https://appsilon.com/decision-support-systems-4-how-to-implement-an-ia-solution/). Productionisation is essentially the support and planning by humans that ensure an AI model’s success. Someday there may be AI models that are self-aware and independent, but we are still a long way off from that. Here is a summary of what we mean:

- **Data validation.** The initial dataset that you use to train the AI model is important, but this isn’t a one-time event. Typically we receive new data as time goes on, and we want to update the model with the new data to teach it new things. Automated data verification is therefore necessary, so that we are immediately alerted if there are problems with the data (a minimal sketch follows this list).
- **Reproducibility.** We have to ensure that the model we develop in our workshop works exactly the same in production for the client. With multiple dependencies and version control, rebuilding and maintaining the model environment correctly is crucial (see the second sketch below).
- **Interpretability.** Our clients need to be prepared to explain their decisions to the users of their products. Recommendations from a model are only helpful if the decision-making process is explainable.
- **Reliability and scaling.** When putting a model into production we need a plan for how it will scale. We need to plan for events like spikes in usage or server outages.
- **Human augmentation and oversight.** We shouldn’t think of AI as a solution that does everything for you. Instead, think of it as IA, intelligence augmentation, and design how humans can work together with the model and oversee its operation.
- **Automated model update.** For a successful implementation, an AI model needs to learn new things based on new data. This should be fully automated, not a manual process.
- **User interface.** It’s important to match your state-of-the-art AI model with a user interface that end-users can employ in their tasks without friction. Depending on the use case, this can be an API called by other systems or a [human-friendly Shiny dashboard](https://appsilon.com/how-we-built-a-shiny-app-for-700-users/).
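To illustrate the data validation point, here is a minimal sketch of an automated check that could run on every new batch of data before it reaches the model. The column names and allowed ranges are hypothetical; in a real project the rules come out of your data governance process, and dedicated R packages such as assertr or pointblank formalize this pattern.

```r
# Minimal automated data check, run on every new batch before scoring.
# Column names and allowed ranges are illustrative only.
validate_batch <- function(df) {
  issues <- character(0)

  required <- c("policy_id", "age", "claim_amount")
  missing_cols <- setdiff(required, names(df))
  if (length(missing_cols) > 0) {
    issues <- c(issues, paste("missing columns:", paste(missing_cols, collapse = ", ")))
  } else {
    if (any(is.na(df[required]))) {
      issues <- c(issues, "NA values in required columns")
    }
    if (any(df$age < 0 | df$age > 120, na.rm = TRUE)) {
      issues <- c(issues, "implausible values in 'age'")
    }
  }

  if (length(issues) > 0) {
    # In production this should alert the team, not fail silently.
    stop(paste(issues, collapse = "; "))
  }
  invisible(df)
}
```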
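On the reproducibility point, one common way to pin a model’s package environment in the R ecosystem is the renv package (containerizing with Docker is a complementary option). A minimal sketch of the standard renv workflow:

```r
# Pin the project's package versions so production matches development.
install.packages("renv")

renv::init()      # set up a project-local package library
# ... develop and train the model here ...
renv::snapshot()  # record exact package versions in renv.lock

# On the production machine (same project, same renv.lock):
renv::restore()   # rebuild the identical package environment
```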
There are several challenges that are common enough that you might as well plan for them (bias, rejection of adoption by employees, data problems). Another [McKinsey survey](https://www.mckinsey.com/featured-insights/artificial-intelligence/global-ai-survey-ai-proves-its-worth-but-few-scale-impact?cid=soc-web) indicates that “AI Power Users” often acknowledge these risks and challenges, and plan for them.

## The future is bright, but challenges remain

Implementing AI is risky, complex, and challenging, but the potential upside of successful AI adoption is significant. It is certainly worth investing in best practices to ensure success. The same McKinsey article that alarmed us with the quote about “AI pilot purgatory” also gave us the following:

> *Although many AI projects languish in purgatory, a majority of executives whose companies have adopted AI report that it has provided an uptick in revenue in the business areas where it is used, and 44 percent say AI has reduced costs.*

And the growth of AI adoption continues:

> *47% of respondents* [of their 2019 survey] *say their companies have embedded at least one AI capability in their business processes—compared with 20 percent of respondents in a 2017 study who said their companies were using AI in a core part of their business.*

What is your take on “AI Pilot Purgatory”? Please add your thoughts in the comments below.

Thanks for reading. Follow me on Twitter [@_joecha_](https://twitter.com/_joecha_).
## Follow Appsilon Data Science on Social Media

- Follow [@Appsilon](https://twitter.com/appsilon) on Twitter
- Follow us on [LinkedIn](https://www.linkedin.com/company/appsilon)
- Sign up for our company [newsletter](https://appsilon.com/blog/)
- Try out our R Shiny [open source](https://appsilon.com/opensource/) packages
- Sign up for the AI for Good [newsletter](https://appsilon.com/ai-for-good/)
