Using AI to Detect Solar Panels From Orthophotos (1/3)

This is the first blog of a three-part miniseries on developing an AI to detect solar panels from orthophotos. The articles will be broken down into three project steps: <ol><li>Data collection and pre-processing</li><li><a href="https://appsilon.com/detecting-solar-panels-with-fastai-part-2/" target="_blank" rel="noopener">Training a neural network model using <code>fastai</code></a></li><li><a href="https://appsilon.com/using-streamlit-to-deploy-poc-app-part-3/" target="_blank" rel="noopener">Deploying your app to the world wide web using <code>streamlit</code></a></li></ol> But before we jump into the project, we need to cover some basics. <ul><li><a href="#anchor-1" rel="noopener noreferrer">Solar panels</a></li><li><a href="#anchor-2" rel="noopener noreferrer">What are orthophotos</a></li><li><a href="#anchor-3" rel="noopener noreferrer">Orthophotos for training</a></li><li><a href="#anchor-4" rel="noopener noreferrer">Image segmentation</a></li><li><a href="#anchor-5" rel="noopener noreferrer">Import libraries</a></li><li><a href="#anchor-6" rel="noopener noreferrer">Creating segmentation masks</a></li><li><a href="#anchor-7" rel="noopener noreferrer">Next steps</a></li></ul> <h2 id="anchor-1">Solar panels</h2> The solar panel industry is booming. In 2020, the solar industry generated roughly $25 billion in private funding in the U.S. alone. With more than 10,000 solar companies across the U.S., a nearly 70% decline in installation costs, and competitive utility prices, it's no wonder 43% of new electric capacity added to the American electrical grid has come from solar. Although solar makes up a meager 4% of all U.S. electricity production, the industry is seeing clear skies. So how do economists and policymakers track this growth? Where should solar business owners target new markets? Is it even possible to monitor adoption on such a massive scale? The answer to these questions, and more, lies in AI. In an age when fossil fuels are beginning to darken our days, renewable sources of energy are starting to shine. From geothermal and wind turbines to hydropower and solar, our options are steadily improving, and with declining technology costs, adoption has been skyrocketing. But there is still room for improvement: using AI and machine learning, we can speed up adoption, improve sales, and track large-scale implementation. How? By using AI to detect solar panels in satellite and aerial images! <blockquote><strong>AI is being used to increase response times to natural disasters and improve humanitarian aid. <a href="https://demo.appsilon.com/apps/building_damage_assessment/" target="_blank" rel="noopener noreferrer">Test Appsilon's AI model for assessing building damage.</a></strong></blockquote> <h2 id="anchor-2">What are orthophotos</h2> Open Google Maps, zoom in on a city, and turn on the satellite view. Chances are, you're looking at one or two orthophotos. These are typically satellite images, but they can also be aerial photographs taken from a plane or UAV. In an unprocessed image, however, features are distorted unless viewed from directly above the target area (the "nadir"). Tall buildings, mountains, and anything else with elevation appear to lean, and straight features like pipelines "bend" with the topography.
<img class="size-full wp-image-18337" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ff3fdb8ad62e9cedb0f_orthophoto_vs_airphoto_reupload.webp" alt="Aerial photograph vs orthoimage showcasing distortion effects of terrain relief on a pipeline (Image from USGS, public domain)." width="570" height="386" /> Aerial photograph vs orthoimage showcasing distortion effects of terrain relief on a pipeline (Image from USGS, public domain). In order to correct the parallax displacement from camera tilt and relief, we need to orthorectify the image, i.e. create a mathematical interpolation of the image using ground control points. This allows us to minimize the scale-warping effect as we move away from nadir. In doing so, we've created a photo with a uniform scale that can overlap other maps with minimal spatial errors. <blockquote><strong>Are you using AI for a social good project? Appsilon's data science team can help through our <a href="https://appsilon.com/data-for-good/" target="_blank" rel="noopener noreferrer">Data4Good initiative.</a></strong></blockquote> But why is this important to identifying Solar Panels? Well, because regular satellite images don't have a uniform scale, any attempts to quantify precise distances and areas would be inaccurate. For example, if you're training a model to identify 1 x 1-meter solar panels on a hillside and calculate the total area, the farther away from nadir it goes, the more likely the panels are to distort in size. Your image segmentation and classification will probably be incorrect and your area measurements inaccurate. If you'd like to follow along and create a model of your own, you can find open-source orthophotos in many places including <a href="https://www.arcgis.com/home/webmap/viewer.html" target="_blank" rel="noopener noreferrer">ESRI's orthophoto Basemaps</a> and <a href="https://www.geoportal.gov.pl/dane/ortofotomapa" target="_blank" rel="noopener noreferrer">Poland's Geoportal for orthomaps</a>. <h2 id="anchor-3">Orthophotos for training AI to detect solar panels</h2> You can access the data we'll be using to train our model from <a href="https://figshare.com/articles/dataset/Distributed_Solar_Photovoltaic_Array_Location_and_Extent_Data_Set_for_Remote_Sensing_Object_Identification/3385780" target="_blank" rel="noopener noreferrer">Distributed Solar Photovoltaic Array Location and Extent Data Set</a> for Remote Sensing Object Identification. The dataset consists of 526 images of 5000 x 5000 px and 75 images at 6000 x 4000 covering four areas of California. The dataset also contains CSV files describing the locations of roughly 20,000 solar panels. <img class="size-full wp-image-18339" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ff431010cba0b0dba93_aerial-photograph-for-ai-full_img-scaled.webp" alt="Aerial photograph of Californian suburbs for AI solar panel detection with fastai" width="2560" height="2560" /> Aerial photograph of Californian suburbs. AI can find solar panels. Can you? <h2 id="anchor-4">Image segmentation as a machine learning task</h2> Once you have your images, you can think about the panel identification task as an image segmentation problem. To solve the problem, you'll need to predict a class for each pixel of an image. In our case, we'll keep it simple with two classes, "solar panel" and "not a solar panel." So, we'll try and predict whether a specific pixel belongs either to a solar panel or not. 
To do this, we will first prepare the training data, i.e. create image segmentation masks. A segmentation mask is an image with the same bounds as the original, but instead of color, each pixel stores the class it belongs to. This helps the model focus on the important areas rather than processing the image as an undifferentiated whole. There are two types of segmentation: semantic and instance. Semantic segmentation groups pixels of the same class and assigns one value to the whole group (e.g., people vs. background in the image below). Instance segmentation additionally distinguishes individual objects, which you'd need for tasks like assigning subclasses or counting instances. In our case, semantic segmentation is all we need. <img class="size-full wp-image-18341" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ff56f892f05d57f4f42_semantic-vs-instance-segmentation-in-ai.webp" alt="Semantic segmentation [left] and instance segmentation [right] (Image credit: www.analyticsvidhya.com)." width="1263" height="384" /> Semantic segmentation [left] and instance segmentation [right] (image credit: www.analyticsvidhya.com). <h2 id="anchor-5">Import libraries</h2> To build the model yourself, you'll need to import a few libraries and load the necessary data. I'll show you how below: <script src="https://gist.githubusercontent.com/pstorozenko/adb24126829e3630856ff018a26fbb87.js?file=load_libraries.py"></script> <pre><code>from glob import glob
from pathlib import Path
from typing import List, Tuple

import numpy as np
import pandas as pd
from PIL import Image, ImageDraw
from tqdm.contrib.concurrent import process_map

root = Path('.')
df1 = pd.read_csv(root / 'polygonDataExceptVertices.csv')
df2 = pd.read_csv(root / 'polygonVertices_PixelCoordinates.csv')

df1.set_index('polygon_id', inplace=True)
df2.set_index('polygon_id', inplace=True)
all_images = df1['image_name'].unique()
</code></pre> As we will discover later, <code>polygon_id</code> is a unique column that serves as an ID, so we immediately set it as the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" target="_blank" rel="noopener noreferrer">pandas index</a> of both data frames. We can also explore a few columns from the <code>polygonDataExceptVertices.csv</code> and <code>polygonVertices_PixelCoordinates.csv</code> files. <script src="https://gist.githubusercontent.com/pstorozenko/adb24126829e3630856ff018a26fbb87.js?file=preview_df1_2.py"></script> <pre><code>df1.loc[:, ['image_name', 'city']]
df2.iloc[:, :8]
</code></pre> The first file details the geospatial data for the polygons of individual solar panel islands, including the image each one appears in and the city.
<table border="1" cellspacing="0" cellpadding="0">
<thead>
<tr><th></th><th>polygon_id</th><th>image_name</th><th>city</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>1</td><td>11ska460890</td><td>Fresno</td></tr>
<tr><td>1</td><td>2</td><td>11ska460890</td><td>Fresno</td></tr>
<tr><td>2</td><td>3</td><td>11ska460890</td><td>Fresno</td></tr>
<tr><td>3</td><td>4</td><td>11ska460890</td><td>Fresno</td></tr>
<tr><td>4</td><td>5</td><td>11ska460890</td><td>Fresno</td></tr>
</tbody>
</table>
Each row in the second file describes a single polygon with the coordinates of its vertices.
<table border="1" cellspacing="0" cellpadding="0">
<thead>
<tr><th></th><th>polygon_id</th><th>number_vertices</th><th>lon1</th><th>lat1</th><th>lon2</th><th>lat2</th><th>lon3</th><th>lat3</th></tr>
</thead>
<tbody>
<tr><td>0</td><td>1</td><td>8</td><td>3360.5</td><td>131.631</td><td>3249.2</td><td>87.9851</td><td>3213.75</td><td>73.85</td></tr>
<tr><td>1</td><td>2</td><td>9</td><td>3361.15</td><td>69.6154</td><td>3217.62</td><td>12.8462</td><td>3217.44</td><td>13.0961</td></tr>
<tr><td>2</td><td>3</td><td>11</td><td>3358.02</td><td>48.1369</td><td>3358.05</td><td>48.15</td><td>3360.74</td><td>42.3793</td></tr>
<tr><td>3</td><td>4</td><td>9</td><td>2571.59</td><td>2068.05</td><td>2571.02</td><td>2067.39</td><td>2571.35</td><td>2067.15</td></tr>
<tr><td>4</td><td>5</td><td>9</td><td>2563.78</td><td>2091.4</td><td>2504.42</td><td>2021.67</td><td>2503.98</td><td>2022.05</td></tr>
</tbody>
</table>
Despite the <code>lon</code>/<code>lat</code> column names, these are pixel coordinates within the image, as the file name suggests. <h2 id="anchor-6">Creating segmentation masks for the solar panel detection algorithm</h2> To create the segmentation masks, we process every image separately; for each image, we locate and draw all of its polygons. We use <code>ImageDraw</code> from the PIL package to draw the polygons onto the images. <code>Image.new("L", size)</code> creates a new single-channel (grayscale) image, which is all we need for a segmentation mask. We fill the polygons with the value 1, because the <code>fastai</code> library expects each class to have a sequential class number. With over 40 GB of image files to process, this will take some time. But because every image can be processed independently, the calculations are easy to parallelize: the <code>tqdm</code> package's <code>process_map</code> function provides a parallel map with a progress bar. Since <code>process_map</code> applies a function of a single argument, we wrap <code>process_image</code> in <code>wrap_process_image</code>. The list comprehension at the end of <code>process_all_images</code> flattens the resulting list of lists into a single list. (If you'd like to see the polygon-drawing step on its own first, a standalone sketch follows below.)
Since <code>images_data</code> is a list of lists of tuples of the same length, it is easily convertible to a pandas data frame. <script src="https://gist.githubusercontent.com/pstorozenko/adb24126829e3630856ff018a26fbb87.js?file=process_image.py"></script> <pre><code>def process_image(img_name: str, rawimg_dir: Path, lab_dir: Path,
                  img_dir: Path, out_s: int) -&gt; List[Tuple]:
    intr_polys = df1.query("image_name == @img_name")
    im0 = Image.open(rawimg_dir / (img_name + ".tif"))
    ims = Image.new("L", im0.size)  # single-channel mask, initialized to 0
    draw = ImageDraw.Draw(ims)
    # iterate only over the polygons belonging to this image
    for polyid, row in df2.loc[intr_polys.index].iterrows():
        n_vert = int(row["number_vertices"])
        poly_verts = row[1:(1 + 2 * n_vert)]
        # a tricky way to obtain a list of tuples [(x[0], x[1]), (x[2], x[3]), ...]
        points = list(zip(poly_verts[::2], poly_verts[1::2]))
        if len(points) == 0:  # in case of missing values
            print(f"Polyid: {polyid} has no points")
            continue
        draw.polygon(points, fill=1)  # fill the polygon with ones
    k1 = im0.size[0] // out_s
    k2 = im0.size[1] // out_s
    split_save_image(im0, img_name, img_dir, k1, k2)
    l = split_save_image(ims, img_name, lab_dir, k1, k2, True)
    return l

def wrap_process_image(img_name: str) -&gt; List[Tuple]:
    return process_image(img_name, root / "raw_images", root / "labels", root / "images", 500)

def process_all_images(all_images: List[str], max_workers: int) -&gt; pd.DataFrame:
    images_data: List[List[Tuple]] = process_map(wrap_process_image, all_images, max_workers=max_workers)
    # flatten the list of lists into a single list of (file, fill) tuples
    return pd.DataFrame([ff for l in images_data for ff in l], columns=["file", "fill"])

max_workers = 4
res = process_all_images(all_images, max_workers)
</code></pre> The last missing piece is the <code>split_save_image</code> function. The original images, at 5000 x 5000 px or 6000 x 4000 px, are far too big to feed into a neural network, so we have to split them into smaller pieces. That's what <code>split_save_image</code> is for. <script src="https://gist.githubusercontent.com/pstorozenko/adb24126829e3630856ff018a26fbb87.js?file=split_save_image.py"></script> <pre><code>def split_save_image(im: Image.Image, stem: str, out_dir: Path,
                     k1: int, k2: int, meta: bool = False) -&gt; List[Tuple]:
    w, h = im.size  # PIL reports (width, height)
    dw = w / k1
    dh = h / k2
    l: List[Tuple] = []
    for i in range(k1):
        for j in range(k2):
            imc = im.crop((i * dw, j * dh, (i + 1) * dw, (j + 1) * dh))
            fname = out_dir / f"{stem}_{i}_{j}.png"
            imc.save(fname)
            if meta:
                # fraction of the patch covered by solar panel pixels
                nz_ratio = np.asarray(imc).sum() / (dw * dh)
                l.append((fname, nz_ratio))
    return l
</code></pre> Here we decided to split the images into 500 x 500 px patches. Every image yields 100 or 96 patches, so many patches won't contain any solar panel fragments. But we won't toss these out, at least not yet: the <em>empty</em> patches might prove useful later. If we pass <code>meta=True</code>, then after saving each patch, the fraction of pixels occupied by solar panels is calculated and appended to the list. An example patch is shown below. <blockquote><strong>Through Data4Good, Appsilon helps build support systems for</strong> <strong><a href="https://appsilon.com/ai4g-a-decision-support-system-for-disaster-risk-management-in-madagascar/" target="_blank" rel="noopener noreferrer">disaster risk management in Madagascar.</a></strong></blockquote> The script will run for a while, processing every image in the <code>raw_images</code> directory; expect it to take about 20 minutes, with the progress bar tracking the remaining time.
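When the run finishes, it's worth a quick sanity check that the expected number of patches landed on disk. Here's a minimal sketch (not from the original post), assuming the script was run from the dataset root so the <code>images</code> and <code>labels</code> output directories sit alongside it:
<pre><code>from glob import glob

# Patches and masks are written side by side, so the counts should match:
# 526 * 100 + 75 * 96 = 59,800 files in each directory.
n_patches = len(glob("images/*.png"))
n_masks = len(glob("labels/*.png"))
print(n_patches, n_masks)
assert n_patches == n_masks
</code></pre>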
The last thing to do is to save the data frame with the pixel-fill ratios. <script src="https://gist.githubusercontent.com/pstorozenko/adb24126829e3630856ff018a26fbb87.js?file=save_results.py"></script> <pre><code>res.to_csv("fill_ratio.csv", index=False)
res.head()
</code></pre> We end up with 59,800 patches of 500 x 500 px and their corresponding segmentation masks. Finally, we can visualize a sample patch and its mask: <img class="aligncenter size-full wp-image-18345" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01ff784868bab9cc442aa_ai-training-patch.webp" alt="Sample 500 x 500 px aerial training patch." width="500" height="500" /> <img class="size-full wp-image-18343" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b01fac2829666b590cadca_patch_solars.webp" alt="Mask containing polygons of solar panels." width="500" height="500" /> A mask containing polygons of solar panels. <h2 id="anchor-7">Next Steps in building an AI to detect solar panels</h2> To recap: in this first part of the series, we downloaded orthophotos of solar panels in California, created corresponding segmentation masks, and split the images into smaller patches that a neural network can consume. In part two, we will use the <code>fastai</code> library to train a PoC model for solar panel detection. Stay tuned! <blockquote><strong>Use Shiny to build elegant, engaging apps to better serve at-risk communities. Explore the possibilities of Shiny with <a href="https://demo.appsilon.com/apps/visuarisk/" target="_blank" rel="noopener noreferrer">Appsilon's VisuaRISK application.</a></strong></blockquote>
