Satellite Image Analysis with fast.ai for Disaster Recovery

By Marek Rogala
April 27, 2020

[UPDATE - December 2020] - Due to the large interest in satellite imagery analysis, we decided to write a FAQ article answering your top questions. You can read it <a href="https://appsilon.com/satellite-image-analysis-faq/" target="_blank" rel="noopener noreferrer">here</a>.

<h3 style="text-align: center;">By <a href="https://www.linkedin.com/in/marrogala/" target="_blank" rel="noopener noreferrer">Marek Rogala</a> and <a href="https://www.linkedin.com/in/swiezew/" target="_blank" rel="noopener noreferrer">Jędrzej Świeżewski, PhD</a></h3>

<em>In this article, we focus on the technical aspects of the machine learning solution that we implemented for the xView2 competition. We created ML models to assess structural damage by analyzing satellite images taken before and after natural disasters. We used PyTorch to build our models and fast.ai to develop their critical parts. Test out our solution <a href="https://demo.appsilon.ai/building_damage_assessment/" target="_blank" rel="noopener noreferrer">here</a>.</em>

<img class="aligncenter wp-image-4034 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0226b50b6cc5df6a3dd60_graph-xview2.webp" alt="" width="2016" height="1152" />

<h2>Introduction</h2>

The <a href="https://appsilon.com/computer-vision/" target="_blank" rel="noopener noreferrer">Appsilon</a> Machine Learning team recently took part in the <a href="https://xview2.org" target="_blank" rel="noopener noreferrer">xView2</a> competition organized by the Defense Innovation Unit (United States Department of Defense). Participants set out to utilize satellite imagery data to assist humanitarian efforts during natural disasters. We were asked to build ML models using the novel <a href="http://openaccess.thecvf.com/content_CVPRW_2019/papers/cv4gc/Gupta_Creating_xBD_A_Dataset_for_Assessing_Building_Damage_from_Satellite_CVPRW_2019_paper.pdf" target="_blank" rel="noopener noreferrer">xBD dataset</a> provided by the organizers to estimate the damage to infrastructure, with the goal of reducing the amount of human labor and time required to plan an appropriate response. You can read more about the details of the competition and the dataset in our previous <a href="https://appsilon.com/ai-for-assisting-in-natural-disaster-relief-efforts-the-xview2-competition/" target="_blank" rel="noopener noreferrer">blog post</a> on the topic.

The models we submitted achieve high accuracy – 0.83792 for localization, 0.65355 for damage, and 0.70886 overall. Whilst some competitors submitted even more accurate models, we believe that for a model to be useful it must have an interface that enables everyone to leverage its capabilities. We therefore developed <a href="https://demo.appsilon.ai/building_damage_assessment/" target="_blank" rel="noopener noreferrer">an intuitive user interface</a> and implemented it in Shiny using our own <a href="https://github.com/Appsilon/shiny.semantic" target="_blank" rel="noopener noreferrer">shiny.semantic</a> open-source package.

In this article, we focus on the technical aspects of our solution and share our experiences in building machine learning models that are able to accurately localize buildings and assess their damage. We used PyTorch to build our models and fast.ai to develop their critical parts. This remarkable library allows for rapid experimentation and makes it easy to utilize the latest technical developments in the field of Computer Vision.
<h2>The Damage Assessment App</h2>

We implemented our models within a <a href="https://demo.appsilon.ai/building_damage_assessment/" target="_blank" rel="noopener noreferrer">Shiny app</a> that allows the user to explore the impact of several real-world natural disasters by running our model on built-in scenarios such as Hurricane Florence and the Santa Rosa Wildfire.

One of the scenarios covers the September 2018 earthquake in Indonesia, which caused a major tsunami that reached the provincial capital of Palu, resulting in significant property damage in the area:

<img class="aligncenter size-full wp-image-4053" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0226c6cd8f7584c8367b1_1_palu_2-v2.gif" alt="" width="934" height="574" />

<h2>The Data</h2>

The task required us to build our model using satellite imagery data for a number of regions that had recently experienced natural disasters. The dataset was very diverse, with affected locations ranging from remote forested areas, through industrial districts with large buildings, to dense urban areas.

<table>
<tbody>
<tr>
<td><img class="aligncenter wp-image-3893" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0226d6cd8f7584c8367dc_2unnamed.webp" alt="" width="400" height="400" /></td>
<td><img class="aligncenter wp-image-3894" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0226fd31e8920363c886b_3unnamed.webp" alt="volcanic eruption" width="400" height="400" /></td>
</tr>
<tr>
<td><img class="aligncenter wp-image-3890" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02270541bf136cb7d0478_4hurricane-florence_00000122_pre_disaster.webp" alt="Hurricane Florence Before Disaster" width="400" height="400" /></td>
<td><img class="aligncenter wp-image-3895" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0227077066e9c82be9563_5unnamed.webp" alt="" width="400" height="400" /></td>
</tr>
</tbody>
</table>
<p style="text-align: center;"><em>Diverse locations and building sizes are a challenge when building computer vision models.</em></p>

Another complicating factor was the variety of disaster types: aside from localizing the buildings, the model must also be able to assess damage to structures, which requires a different approach depending on whether the area was destroyed by, for example, a fire or a flood.
A number of different disaster types were represented in the dataset, including:

<table>
<tbody>
<tr>
<td><img class="aligncenter size-full wp-image-4054" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02271f7981e509d4f79f8_guatemala-volcano_00000018_pre_disaster-v2.gif" alt="Volcanic Eruptions Before and After" width="460" height="460" /></td>
<td><img class="aligncenter size-full wp-image-4055" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022743b74359ce542f817_hurricane-harvey_00000111_and_michael_76-v2.gif" alt="Hurricane Harvey Before and After" width="460" height="460" /></td>
</tr>
<tr>
<td style="text-align: center;"><em>Volcanic eruptions (pre and post)</em></td>
<td style="text-align: center;"><em>Hurricanes (pre and post)</em></td>
</tr>
<tr>
<td><img class="aligncenter size-full wp-image-4056" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02274952f213de6d539ed_nepal-flooding_00000105_and_100-v2.gif" alt="Nepal Flooding Before and After" width="460" height="460" /></td>
<td><img class="aligncenter size-full wp-image-4057" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02275fb4af0620514b7ef_palu-tsunami_00000073_and_178_and_182-v2.gif" alt="Palu Tsunami Before and After" width="460" height="460" /></td>
</tr>
<tr>
<td style="text-align: center;"><em>Disastrous floods (pre and post)</em></td>
<td style="text-align: center;"><em>Tsunamis (pre and post)</em></td>
</tr>
<tr>
<td><img class="aligncenter size-full wp-image-4058" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022769ba690fd799866a5_santa-rosa-wildfire_00000007_and_36-v2.gif" alt="Santa Rosa Wildfire Before and After Satellite" width="460" height="460" /></td>
<td><img class="aligncenter size-full wp-image-4059" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022773eb2cd82480a6a10_tuscaloosa-tornado_00000023_pre_disaster-v2.gif" alt="Tuscaloosa Tornado Satellite Before and After" width="460" height="460" /></td>
</tr>
<tr>
<td style="text-align: center;"><em>Raging wildfires (pre and post)</em></td>
<td style="text-align: center;"><em>Tornado damage (pre and post)</em></td>
</tr>
<tr>
<td style="padding-right: 150px; padding-left: 150px;" colspan="2"><img class="aligncenter size-full wp-image-4060" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0227945aea1a61422e187_pinery-bushfire_00000048_pre_disaster-v2.gif" alt="Pinery Bushfire Before and After Satellite" width="460" height="460" /></td>
</tr>
<tr>
<td style="text-align: center;" colspan="2"><em>Bushfires (pre and post)</em></td>
</tr>
</tbody>
</table>

<h2 style="text-align: left;">Why use satellite imagery?</h2>

The applications of satellite imagery are proliferating as the data itself improves – we are gaining access to more frequent and accurate snapshots of the earth’s surface. These data have a number of intrinsic benefits. Aside from their frequency, the retrievability of historical images, and their increasingly high quality, researchers and practitioners utilize the fact that satellite images extend into parts of the electromagnetic spectrum beyond visible light. This opens up opportunities for analysis unavailable to the human eye. Some satellites offer as many as 12 different channels of information, including not only visible light but also, for example, infrared signals – we covered <a href="https://appsilon.com/how-to-acquire-large-satellite-image-datasets-for-machine-learning-projects/" target="_blank" rel="noopener noreferrer">how to acquire such data</a> in one of our previous blog posts.
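To give a flavor of what those extra channels enable, here is a minimal sketch that computes NDVI, a standard vegetation index, from the red and near-infrared bands of a multispectral image. It assumes the rasterio library and a hypothetical Sentinel-2-style GeoTIFF in which band 4 is red and band 8 is near-infrared; the xBD images used in the competition itself are plain RGB, so this is purely illustrative.

<pre><code class="language-python">
import numpy as np
import rasterio

# Hypothetical multispectral GeoTIFF; band order follows Sentinel-2
# conventions (band 4 = red, band 8 = near-infrared).
with rasterio.open("scene_multispectral.tif") as src:
    red = src.read(4).astype(np.float32)
    nir = src.read(8).astype(np.float32)

# NDVI = (NIR - red) / (NIR + red); a small epsilon avoids
# division by zero over water or no-data pixels.
ndvi = (nir - red) / (nir + red + 1e-8)

# Healthy vegetation scores close to 1, bare soil near 0, and water
# below 0 -- useful, for example, for mapping flood extent.
print(f"NDVI range: {ndvi.min():.2f} to {ndvi.max():.2f}")
</code></pre>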
At Appsilon we have utilized satellite imagery for a number of <a href="https://appsilon.com/ship-recognition-in-satellite-imagery-part-i/" target="_blank" rel="noopener noreferrer">commercial projects</a>, but this was our first venture into applying this expertise to humanitarian aid. The project leveraged a unique feature of satellite data: the availability of both historical images of a given area and new images that represent the most recent conditions. Both are critical for assisting humanitarian response – the damage assessment must be conducted as quickly as possible to develop an appropriate plan of remedial action, and any new developments must be accounted for. Relieving response planners of the burden of sifting through thousands of images or conducting in-person surveys allows scarce resources to be focused on actual service delivery. Overall, AI tools based on satellite imagery significantly reduce the time required to take appropriate action, enabling the response effort to save more lives.

<blockquote><em>The conclusion of the xView2 competition in January 2020 overlapped with the most dire period of the Australian wildfires – the models developed during the challenge were quickly deployed as open-source solutions to assist damage assessment and recovery efforts.</em></blockquote>

<h2>The machine learning pipeline</h2>

Developing an accurate ML model for satellite imagery analysis comprises two tasks: building localization and building damage assessment. While this can be done with a single model, we decided to build a separate model for each task. Given the time pressure, we also reused parts of the code from the official <a href="https://github.com/DIUx-xView/xview2-baseline/tree/master/model" target="_blank" rel="noopener noreferrer">baseline model</a>, including the code for generating the final masks for submission from predictions. We implemented the models in PyTorch and used the fast.ai wrapper for the damage classification task.

This approach has a few pitfalls, stemming from the fact that each model requires its own training dataset while the preprocessing schedule must stay consistent across them. Preprocessing entails introducing changes to the color histograms of the images, as well as cropping, padding, flipping, and other data augmentation methods. We therefore quickly ran into the challenge of managing a large number of moving parts. At one point during the competition, we submitted a solution in which inference was performed with slightly different parameters than those used for model training, resulting, not surprisingly, in a very low score. To prevent this from happening again, we employed an ML pipeline that makes our training process fully reproducible whilst remaining efficient. We achieved this by basing the pipeline on our internal framework: it memoized the results of all the steps, allowing us to reuse them, and it automatically re-ran a computation whenever any of its hyperparameters or dependencies changed. This made our development process much faster, whilst safeguarding us from making mistakes.
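Our internal framework is not public, but the core mechanism can be sketched in a few lines: key each step's cached result on a hash of its hyperparameters and inputs, and recompute only when that key changes. The decorator below is a minimal, hypothetical illustration of the idea (including the preprocess_images stub), not our actual implementation.

<pre><code class="language-python">
import functools
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("pipeline_cache")
CACHE_DIR.mkdir(exist_ok=True)

def cached_step(func):
    """Memoize a pipeline step on a hash of its inputs.

    If a hyperparameter or an upstream result changes, the hash
    changes and the step is recomputed; otherwise the pickled
    result is loaded back from disk.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            pickle.dumps((func.__name__, args, sorted(kwargs.items())))
        ).hexdigest()
        cache_file = CACHE_DIR / f"{func.__name__}_{key}.pkl"
        if cache_file.exists():
            return pickle.loads(cache_file.read_bytes())
        result = func(*args, **kwargs)
        cache_file.write_bytes(pickle.dumps(result))
        return result
    return wrapper

@cached_step
def preprocess_images(image_dir, crop_size=512, flip=True):
    """Augment and crop raw images (details omitted)."""
    ...
</code></pre>

Wrapping each step this way means that changing, say, an augmentation parameter invalidates exactly the steps that depend on it, while everything upstream is reused from the cache.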
The specific steps were:

<img class="aligncenter wp-image-4033 size-full" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0227ad31e8920363c90bf_graph-xview2-pipeline.webp" alt="" width="2500" height="687" />

<ol>
<li>Input images preprocessing (e.g., image augmentation)</li>
<li>Localization model training</li>
<li>Building localization (inference from the model)</li>
<li>Building cut-outs</li>
<li>Damage classification model training</li>
<li>Damage classification (inference from the model)</li>
<li>Submission / post-processing</li>
</ol>

An appropriate ML pipeline not only ensures a robust, error-free development process, it also enhances the <b>reproducibility</b> of the solution. It is much easier to understand the resulting model if we understand the details of the steps taken during its development.

<h2>Technical takeaways</h2>

The ML pipeline proved critical when working with two stacked models (localization and damage assessment). We would also like to share two other techniques which enabled us to deliver an accurate model in a limited timeframe.

<h3>Transfer learning for localization</h3>

For localization, we used transfer learning to apply one of the best models developed for a SpaceNet competition to the context of building damage assessment in natural disaster response.

Localization constituted just 30% of the final score in the xView2 competition. However, it greatly affected the overall result, because the model can only classify damage to a building if it has found that building in the first place. Essentially, the better the localization model, the higher the potential score for damage assessment.

Neural networks based on the UNet architecture are designed for segmentation problems, making them suitable for this task. UNet architectures solve segmentation problems by first encoding the image into a smaller-resolution representation and then decoding it back to create a segmentation mask. However, the process of developing appropriate weights for the segmentation exercise can be time-consuming. Fortunately, localization of buildings in images is a well-explored problem, and we were able to leverage existing, state-of-the-art ML localization solutions developed through a series of <a href="https://github.com/SpaceNetChallenge" target="_blank" rel="noopener noreferrer">SpaceNet competitions</a> to build our model. Specifically, we set out to use <a href="https://solaris.readthedocs.io/en/latest/pretrained_models.html" target="_blank" rel="noopener noreferrer">pre-trained weights</a> created by XDXD for one of the top solutions in SpaceNet4, built using a VGG-16 encoder.

<img class="size-full wp-image-3900" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0227c9ba690fd79986ab8_14pasted-image-0.webp" alt="UNet diagram" width="711" height="360" />
<em>UNet is a natural choice for segmentation problems such as building localization in satellite images (example of a UNet architecture by Mehrdad Yazdani - Own work, CC BY-SA 4.0, <a href="https://commons.wikimedia.org/w/index.php?curid=81055729" target="_blank" rel="noopener noreferrer">Creative Commons</a>)</em>

Once we built the model with the pre-existing weights, we continued training it on the xView2 data. This allowed us to achieve improved accuracy, whilst saving computation time compared to building a model from scratch.
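A rough sketch of this kind of transfer learning follows. It uses the segmentation_models_pytorch library as a convenient stand-in for the original XDXD architecture, and a hypothetical local path for the downloaded SpaceNet weights; layer names can differ between implementations, which is why the weights are loaded non-strictly.

<pre><code class="language-python">
import torch
import segmentation_models_pytorch as smp

# A UNet with a VGG-16 encoder, mirroring the XDXD SpaceNet4 setup;
# one output channel for the building segmentation mask.
model = smp.Unet(encoder_name="vgg16", encoder_weights=None,
                 in_channels=3, classes=1)

# Hypothetical path to the downloaded pre-trained SpaceNet weights;
# strict=False tolerates small naming differences between the
# original implementation and this stand-in architecture.
state = torch.load("xdxd_spacenet4_vgg16_weights.pth", map_location="cpu")
model.load_state_dict(state, strict=False)

# Continue training on xView2: a standard segmentation step with
# binary cross-entropy on the predicted building mask.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

def training_step(images, masks):
    optimizer.zero_grad()
    preds = model(images)          # (N, 1, H, W) logits
    loss = loss_fn(preds, masks)   # masks: (N, 1, H, W) in {0, 1}
    loss.backward()
    optimizer.step()
    return loss.item()
</code></pre>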
<h3>7-channel input for classification</h3>

We further expedited the training of the damage assessment model by adding a 7th input channel that identifies the location of the building in an image.

Using the localization model, we were able to identify the locations of individual buildings. We cut each building out of the larger image, including a small portion of the surrounding area, because the damage score for a given building sometimes depended on its surroundings. For example, a building with an intact roof but fully surrounded by water from a flood was scored as having sustained significant damage. This gave us two cutouts for each building, depicting its state pre- and post-disaster (each with 3 channels: RGB).

In addition, we used a 7th channel: a mask highlighting the location of the building in the cutout. The 7th channel allows the classification model to quickly identify the part of the image it should focus on – the building itself. Such an addition increases the size of the model and may slow down inference slightly. However, it actually speeds up training, as the model learns more quickly where to focus its attention, and it results in improved accuracy.

<img class="size-full wp-image-3897" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0227d6738007ca36727b7_15imageLikeEmbed.webp" alt="Appsilon Inference Process" width="1004" height="579" />
<em>Our inference process: The mask helps the model identify buildings and generate cut-outs. Ultimately, it expedites damage classification.</em>
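The sketch below illustrates the 7-channel idea using a standard torchvision ResNet as a stand-in for our actual classifier: stack the pre-disaster RGB, the post-disaster RGB, and the building mask into a 7-channel tensor, then widen the network's first convolution to accept it while reusing the pretrained RGB filters. The tensors here are random placeholders.

<pre><code class="language-python">
import torch
import torch.nn as nn
from torchvision import models

# Stack pre-disaster RGB (3), post-disaster RGB (3), and the
# building mask (1) into a single 7-channel input tensor.
pre = torch.rand(3, 224, 224)    # placeholder pre-disaster cutout
post = torch.rand(3, 224, 224)   # placeholder post-disaster cutout
mask = torch.rand(1, 224, 224)   # placeholder building mask
x = torch.cat([pre, post, mask], dim=0).unsqueeze(0)  # (1, 7, 224, 224)

# Start from an ImageNet-pretrained ResNet and widen its first
# convolution from 3 to 7 input channels.
model = models.resnet34(pretrained=True)
old_conv = model.conv1
model.conv1 = nn.Conv2d(7, 64, kernel_size=7, stride=2,
                        padding=3, bias=False)
with torch.no_grad():
    # Reuse the pretrained RGB filters for both image triplets and
    # initialize the mask channel with their channel-wise mean.
    model.conv1.weight[:, :3] = old_conv.weight
    model.conv1.weight[:, 3:6] = old_conv.weight
    model.conv1.weight[:, 6:] = old_conv.weight.mean(dim=1, keepdim=True)

# Four xBD damage classes: no damage, minor, major, destroyed.
model.fc = nn.Linear(model.fc.in_features, 4)
logits = model(x)  # (1, 4)
</code></pre>

Initializing the extra channels from the pretrained filters rather than at random is what lets the widened network keep most of the benefit of ImageNet pre-training.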
<h2>Conclusion</h2>

We believe that this is a prime example of how the data science community can contribute to fighting one of the greatest challenges of our time. As part of our <a href="https://appsilon.com/data-for-good/" target="_blank" rel="noopener noreferrer">Data for Good</a> initiative, we encourage the tech community from around the world to contribute their skills to empower those working at the forefront of <a href="https://appsilon.com/ai4g-a-decision-support-system-for-disaster-risk-management-in-madagascar/" target="_blank" rel="noopener noreferrer">natural disaster management and response</a> to make better use of the latest and most advanced solutions. Make sure to explore our <a href="https://demo.appsilon.ai/building_damage_assessment/" target="_blank" rel="noopener noreferrer">building damage assessment app</a> and follow us for more machine learning and AI for Good content.

<h2><b>Follow Appsilon Data Science</b></h2>
<ul>
<li>Follow <a href="https://twitter.com/appsilon">@Appsilon</a> on Twitter</li>
<li>Follow Appsilon on <a href="https://www.linkedin.com/company/appsilon">LinkedIn</a></li>
<li>Try out our R Shiny <a href="https://appsilon.com/opensource/">open source</a> packages</li>
<li>Sign up for the Data for Good <a href="https://appsilon.com/ai-for-good/">newsletter</a></li>
<li>Reach out to our Data for Good initiative: <a href="mailto:data4good@wordpress.appsilon.com">data4good@wordpress.appsilon.com</a></li>
</ul>