Run a Successful ML Pilot Project in 8 Steps: How to Avoid “Pilot Purgatory”
INTRODUCTION (THE TL;DR VERSION)
This is part 2 of our discussion on AI development and implementation. You can find part 1 here.
AI “Pilot Purgatory” describes a functioning AI model that is not put to use within the organization for which it was built. Essentially, an AI model is in “purgatory” if it works properly, yet virtually no one in an organization is using it. This is a serious problem because AI models can significantly reduce costs in businesses where they are implemented.
Here are the TL;DR steps to avoid ending up in pilot purgatory:
- Start with a clear use case and a demonstrated need for AI.
- Check assumptions early and often with users.
- Keep in mind that productionisation of the solution will require taking care of many aspects (see below).
- Start by taking care of only the core of the solution, the part that truly brings value, when building a PoC or MVP.
- Carefully explain to users how the adoption of the solution will benefit them, and show them that it is not a threat.
- Make sure that users receive sufficient training.
- Get back to solving all of the technical requirements you’ve planned in step 3.
- Listen carefully to feedback from users, and check user activity and usage patterns.
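The last step – checking user activity and usage patterns – can start very simply: count distinct active users per day from your application logs. Below is a minimal sketch in Python; the event data and function name are illustrative assumptions, not part of any specific product.

```python
from collections import defaultdict
from datetime import date

# Hypothetical usage events pulled from application logs: (day, user_id) pairs.
events = [
    (date(2020, 3, 2), "alice"),
    (date(2020, 3, 2), "bob"),
    (date(2020, 3, 3), "alice"),
    (date(2020, 3, 4), "alice"),
]

def daily_active_users(events):
    """Count distinct users per day -- a first signal of adoption (or purgatory)."""
    users_per_day = defaultdict(set)
    for day, user in events:
        users_per_day[day].add(user)
    return {day: len(users) for day, users in sorted(users_per_day.items())}

print(daily_active_users(events))
```

A flat or shrinking daily-active-user count after launch is exactly the early-warning sign of pilot purgatory that this step is meant to catch.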
We can split the root causes of unsuccessful AI projects into two categories. Let’s call one category ‘cultural’ and the other ‘technical’.
The first source of issues is connected with company culture and how the project is conducted – the problems that arise here are typical of any organizational change, such as starting without a clear use case or sticking to a rigid waterfall approach. With all of the hype currently surrounding AI, it is often forgotten that (1) preparing and maintaining a solution brings a lot of challenges, and (2) there should be a legitimate use case for the company.
AI and DS projects can be hard to estimate in the traditional sense, primarily in terms of cost and timeline. This often discourages management from getting involved. When I was working for one of the Big 4 consultancies, I was required to provide exact estimates of the workload needed to finish a project or a project module. The managers needed these to fill their spreadsheets, budgets, and reports – or they had already agreed to deadlines with clients. They were not very interested in whether such predictions would turn out to be accurate. Of course, having the final goal and an overall plan in mind is crucial, but AI projects need to be approached differently. Delivering such a project requires a learning path – about the data, client requirements, technical solutions, and so on. If you stick to the level of knowledge you had at the beginning of the road, the end result will be disappointing. Adjusting tasks and estimates along the route is a must. Nevertheless, it is not easy to let go of the need to control every aspect of a project, so many AI initiatives are terminated before they even get started because of this overall unpredictability.
The goal for management is usually to deliver value for the client as quickly as possible. So the solution in non-agile organizations is to proceed in small, more predictable steps, starting with a Proof of Concept (PoC) and a Minimum Viable Product (MVP). Such an approach will not only make management feel safer about planning, but it will also quickly make a positive impression on the client, which makes it easier to acquire resources for continued development. This is the path of maximum value and minimum wasted time…
Further, the overall benefit of AI and DS projects is often hard to estimate. There is a lot of research involved, hypotheses to test, and improvements to implement. Only rarely can it be accurately claimed upfront: “this project will take 65 man-days and the result will be a 13% increase in sales.” No. But this is the beauty of data science: you can be surprised by how it boosts your business. Some projects might have a minimal effect, and others might have a dramatic positive effect. That is why the PoC and MVP concepts are so crucial at the inception of a project. Nevertheless, it is often difficult for management to accept investments with uncertain ROI.
Due to the dynamic nature of AI projects, it is crucial to employ an agile workflow in close collaboration with the client, so you can adjust quickly and deliver maximum value. The point is to organize the project in such a way that something useful and valuable is produced every week. Working the other way around may be why many projects end up in purgatory: if, after a few months or a couple of iterations, the AI has still delivered no value, the project may be shut down.
Crucially, when an AI solution is implemented within a company, it might not be accepted by employees – just like any other business change. As human beings, we generally prefer the status quo. And because AI-driven changes are often misunderstood, adoption may be avoided entirely if people worry that the machines will “steal” their jobs. That is why I prefer to speak about AI “augmentation,” not “automation”: the implemented solution simplifies the work, makes it faster, and brings new insights, but the human is still the crucial part of the decision-making process. In short: AI amplifies human expertise. Only dull, repetitive, low-stakes tasks are done automatically by the machines. When AI is presented in this manner – as a helper – it is accepted and implemented more easily.
There is a second challenge with implementing AI solutions. Building a good model is one thing, but productionisation (i.e., making the model useful for the business) is a different story. The whole workflow must be set up correctly: from feeding in and updating data, through correct, tested, reliable model responses (sometimes the speed of the response is crucial too!), to a bug-free user interface. The typical scenario involves integrating many already-existing tools into a smooth pipeline. Scalability (i.e., dealing with many concurrent users) may also be an issue. Then there is a whole deployment strategy to implement: internal servers or cloud systems, which machines to use, how to ensure that the runtime environment matches the one used locally, and so on. Setting access rights, logging user activity, securing data, and acting on results and insights must also be taken care of. Not to mention maintaining the code base, fixing bugs, testing, releasing updates, and setting up a common environment for the developers. All of these aspects are crucial parts of the system and require a great deal of knowledge and experience to deliver properly.
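To make the scope of “productionisation” concrete, here is a minimal sketch of the serving-side checks that wrap a single model call: input validation, a latency budget, and activity logging. Everything here – the stand-in model, the feature names, the budget value – is an illustrative assumption, not a description of any real deployment.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-service")

# Illustrative stand-in for a trained model: any callable taking a feature dict.
MODEL = lambda features: 0.5 * features["x"] + features["y"]

LATENCY_BUDGET_S = 0.2  # hypothetical response-time requirement

def validate_input(features):
    """Reject malformed requests before they ever reach the model."""
    if not isinstance(features, dict):
        raise ValueError("features must be a dict")
    for key in ("x", "y"):
        if key not in features:
            raise ValueError(f"missing feature: {key}")

def predict(user, features):
    """One production-style request: validate, predict, time, and log."""
    validate_input(features)
    start = time.perf_counter()
    result = MODEL(features)
    elapsed = time.perf_counter() - start
    if elapsed > LATENCY_BUDGET_S:
        log.warning("slow response: %.3fs for user %s", elapsed, user)
    log.info("user=%s features=%s prediction=%s", user, features, result)
    return result
```

Even this toy wrapper shows why productionisation is its own project: each concern (validation, latency, audit logging) grows into real infrastructure – schema checks, monitoring, access control – once many users and models are involved.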
However, you shouldn’t start a project by worrying about all of the above problems. First, focus on proving that the AI implementation is useful and can bring significant value to your organization. The worst thing that can be done is to prepare the whole infrastructure just to find out that no one is truly interested in the product. Then you’ll surely end up in purgatory.
Thanks for reading! For more, follow me on Linkedin and Twitter. You can also check out my other articles: Forget about Excel, Use these R/Shiny Packages Instead and Super Solutions for Shiny Architecture.