3 keys to AI transformation in the insurance sector


<h3><b>tl;dr</b></h3> AI will transform the insurance sector. The two techniques that will have the biggest impact are convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The third key to business success is a mature approach to putting AI into practice -- disciplined productionization. <h2><b>Introduction</b></h2> I just got back from the Insurance Data Science Conference in Zurich, where I gave a talk on how insurance companies can streamline their operations with AI, using the example of passenger car insurance claims. In this article I’d like to share my thoughts from that talk. In 2017 there were over 15.4 million reported vehicle accidents in the United States. In 2016, there were 16.1 million. These numbers include passenger cars, light trucks, large trucks, and motorcycles. <img class="aligncenter size-full wp-image-2227" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022fd522cd0bc608d6f0f_1people-involved-in-police-reported.webp" alt="People involved in police-reported crashes" width="512" height="227" /> <p style="text-align: center;">From <a href="https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812696">https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812696</a></p> Insurance companies therefore process massive amounts of claim data, and will continue to do so for some time. Insurance claims processing requires sorting through heterogeneous inputs -- handwriting, photos, video, audio, signs, maps, and typed documents, all coming from multiple sources. Insurance companies can <b>streamline</b> claims processing and other business processes with AI. The two modern techniques that will have the biggest impact on insurance are CNNs for <b>Computer Vision</b> and RNNs for <b>Text Processing</b> (handwriting and speech). The key to business success, however, is not model accuracy, but applying the best practices of productionization (more about that later).
<h2><b>AI will transform insurance thanks to recent advances</b></h2> Claim processing often begins with a photo of the vehicle’s damage. Instead of having a human verify these images, we can have an AI model recognize and classify the type of damage. Moreover, it can verify that the photo actually shows the insured car. <img class="aligncenter size-full wp-image-2230" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65aac350dc850eae396e34fd_1.1-recognizing-damage-3.webp" alt="recognizing damage on car" width="900" height="600" /> <p style="text-align: center;"><i>Results of a visual AI model classifying car damage.</i></p> Signing insurance policies and processing claims requires gathering information like the license number, vehicle identification number (VIN), mileage and other dashboard indicators. Insurers receive scans of vehicle documents and photos of specific parts of the vehicle, like those containing VIN plates. All of these can be processed by AI instead of a human to streamline policy signing and claims processing. As I mentioned before, the two AI techniques that will have the biggest impact on the insurance sector are convolutional neural networks, which are great at recognizing objects in images, and recurrent neural networks, which can “read” handwriting and “understand” speech. Neural networks are state-of-the-art for visual tasks, surpassing even human accuracy on some benchmarks. Thanks to the rise of graphics processing units (GPUs) and a number of recently discovered techniques, like smarter optimizers, one-cycle fitting, and transfer learning, we can now train much deeper models than ever before. As a result, they now lead in most visual tasks and are becoming widely used in business. <h2><b>How visual AI models work</b></h2> The core concept that made neural networks so effective at visual tasks is the convolution. The concept is modelled after the human visual system -- this is how we see things.
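To make the idea concrete, here is a minimal, illustrative sketch of a single convolution: a small hand-written vertical-edge filter slid across a tiny grayscale image. This is a toy in plain Python, not production code, and a real CNN learns thousands of such filters from data instead of having them specified by hand.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding), using plain Python lists."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector: it responds where brightness changes
# from left to right and stays near zero on flat regions.
vertical_edge = [
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
]

# 5x5 image: dark left half, bright right half -> one vertical edge.
image = [[0, 0, 1, 1, 1] for _ in range(5)]
response = conv2d(image, vertical_edge)
print(response[1])  # strong response where the window straddles the edge
```

The first layer of a CNN is, in essence, a bank of filters like `vertical_edge` whose weights are learned rather than hand-picked; deeper layers then combine these responses into the increasingly complex features shown below.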
Our brain doesn’t have just one part responsible for all object recognition. Instead, recognition is distributed across our eyes, nervous system and primary visual cortex: the initial stages recognize simple shapes like a straight line or a curve, later stages recognize more and more complex shapes and patterns, and the final stages recognize faces and entire objects. The amazing thing is that when we train a neural network, it learns what to do at each of these levels, and we don’t have to specify any of them.<img class="aligncenter size-full wp-image-2231" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02300952f213de6d54224_visual-AI-model.webp" alt="how visual ai models work" width="772" height="344" /> So how does it work? What happens inside such a network? I think it is best understood through visualizations like the images below. For each layer we see the features that the network learned and examples of images best matching those features. The first layer is only able to recognize very simple structures, like straight lines and some color gradients. As we progress through the layers, the objects and features that are recognized become increasingly complex. In the second layer we can see rounded shapes. In the third layer we can already see repeating shapes and objects, such as a wheel and a part of a car. Later we have dog faces and even more complex features, and finally full objects, like flowers, humans and bikes, in the last layer. <img class="aligncenter size-full wp-image-2232" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b023018587215be0b1ee41_LAYERS.webp" alt="LAYERS OF AN AI MODEL" width="1157" height="824" /> <p style="text-align: center;">Source: <a href="https://arxiv.org/abs/1311.2901">Visualizing and Understanding Convolutional Networks</a>, Matthew D.
Zeiler and Rob Fergus</p> <h2><b>Text Recognition with RNNs</b></h2> When it comes to recognizing text, such as VINs, license plates and vehicle registration documents, the most important modern technique is the recurrent neural network (RNN). In particular, architectures based on LSTM (Long Short-Term Memory) units allow us to recognize text more precisely. An LSTM is able to remember both information that it has seen recently and information it saw long ago, which raises accuracy to previously unseen levels. Here are a few examples: <img class="aligncenter size-full wp-image-2233" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02303f20e4b4e372c57b6_house_read.gif" alt="AN RNN READS ADDRESS SIGNS" width="224" height="400" /> <p style="text-align: center;">Source: <a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">http://karpathy.github.io/2015/05/21/rnn-effectiveness/</a></p> First example: an algorithm learns a recurrent network policy that steers its attention around an image; in particular, it learns to read out house numbers from left to right. Another LSTM example: automatic transcription without prior segmentation into lines, a step that was required in previous approaches: <img class="aligncenter size-full wp-image-2234" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b39d3a279eef575e7bb11a_pasted-image-0-1.webp" alt="LSTM example: automatic transcription without a prior segmentation into lines," width="1600" height="160" /> <p style="text-align: center;">Source: <a href="https://ieeexplore.ieee.org/document/8270105">https://ieeexplore.ieee.org/document/8270105</a></p> For more about CNNs and RNNs, here is a nice <a href="https://blogs.nvidia.com/blog/2018/09/05/whats-the-difference-between-a-cnn-and-an-rnn/">description</a>.
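The memory mechanism behind this can be sketched in miniature. Below is a single LSTM step with scalar input and state for readability -- a toy, not a production text recognizer -- where the made-up weights stand in for values a trained model would learn:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM time step with scalar input and state.

    The forget gate decides how much old cell state to keep (long-term
    memory), the input gate how much new information to write, and the
    output gate how much of the cell state to expose as the new hidden
    state (short-term memory).
    """
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])  # candidate
    c = f * c_prev + i * g   # new cell state (long-term memory)
    h = o * math.tanh(c)     # new hidden state (short-term memory)
    return h, c

# Toy weights for demonstration; a trained model learns these from data.
# The positive forget bias keeps old information alive across steps.
w = dict(wf=1.0, uf=0.0, bf=2.0, wi=1.0, ui=0.0, bi=0.0,
         wo=1.0, uo=0.0, bo=0.0, wg=1.0, ug=0.0, bg=0.0)
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:   # a short input sequence
    h, c = lstm_step(x, h, c, w)
print(round(h, 3))           # hidden state after reading the sequence
```

In a real recognizer the inputs are vectors (character or image-strip features), the weights are matrices, and many such cells are stacked, but the gating structure that lets the network balance recent and long-ago context is exactly this.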
<h2><b>Productionization Is the Third Key</b></h2> Accuracy of a model is important, but the key to business success is a mature approach to putting AI into production. In our experience, the key points for successful productionization of an AI model are the following:  <ol><li style="font-weight: 400;"><b>Data validation</b>. The initial dataset that you use to train the ML model is important, but training isn't a one-time event. Typically we receive new data as time goes on, and we want to update the model with that data to teach it new things. Automated data verification is therefore necessary, so that we are alerted immediately if there are problems with the data.</li><li style="font-weight: 400;"><b>Interpretability</b>. This is especially important for insurance. Insurance professionals need to be prepared to explain their decisions to the users of their products. Recommendations from a model are only helpful if the decision-making process is explainable.</li><li style="font-weight: 400;"><b>Reliability and scaling</b>. When putting a model into production we need a plan for how it will scale, and for events like spikes in usage or a server outage. We design the system to be deployed on multiple servers in the cloud, so that new servers can be added at any time and there’s no single point of failure. We prefer to package models in Docker containers that can be easily scaled and managed.</li><li style="font-weight: 400;"><b>Human augmentation and oversight</b>. We shouldn't think of A.I. as a solution that does everything for you. Instead, think of it as I.A. -- intelligence augmentation -- and design how humans can work together with the model and oversee its operation. The model can help insurance professionals with repetitive tasks, but in the end it is the human who makes the decision.</li><li style="font-weight: 400;"><b>Automated model update</b>. For an AI model to remain successful, it needs to learn new things based on new data.
This should be fully automated, not a manual process. That way we avoid possible errors and ensure that updates happen frequently.</li><li style="font-weight: 400;"><b>User interface.</b> It's important to match your state-of-the-art ML model with a user interface that end users can employ in their tasks without friction. Depending on the use case, this can be an API called by other systems or a <a href="https://appsilon.com/how-we-built-a-shiny-app-for-700-users/">human-friendly Shiny dashboard</a>.</li></ol> In the coming weeks we will release another article describing best practices for productionization that apply to AI as well as a host of other data science projects. <h2><b>Time for AI in insurance is now</b></h2> Deep learning architectures based on CNNs and RNNs have gotten to a point where we can successfully apply them to a range of problems in the insurance sector. With the right approach to productionization, these solutions will bring insurance processes to new levels of performance. The time for AI in insurance is now, and we’re excited to be a part of this change. You can find me on Twitter <a href="https://twitter.com/marekrog?lang=en">@marekrog</a>.
