<b>Image classification</b> is one of the many exciting applications of <b>convolutional neural networks</b>. Aside from simple image classification, there are plenty of fascinating problems in computer vision, with <b>object detection</b> being one of the most interesting. YOLO (“You Only Look Once”) is an effective real-time object detection algorithm, first described in the seminal 2015 paper by Joseph Redmon et al. In this article, we introduce the concept of object detection, the YOLO algorithm itself, and one of the algorithm's open-source implementations: Darknet.
If you're ready to live life to the fullest and Carpe Imaginem, continue reading. We promise to minimize our use of outdated slang terms.
<strong>Updated</strong>: March 17, 2022.
<ul><li><a href="#anchor-1" target="_blank" rel="noopener noreferrer">Object Detection Overview</a></li><li><a href="#anchor-2" target="_blank" rel="noopener noreferrer">Understanding YOLO Object Detection: The YOLO Algorithm</a></li><li><a href="#yolo-variants">YOLO Variants - Everything You Need to Know</a></li><li><a href="#anchor-3" target="_blank" rel="noopener noreferrer">Darknet: A YOLO Implementation</a></li></ul>
<hr />
<h2 id="anchor-1"><strong>Object detection overview</strong></h2>
Object detection is commonly associated with self-driving cars, where systems blend computer vision, LIDAR, and other technologies to generate a multidimensional representation of the road with all its participants. It is also widely used in video surveillance, especially in crowd monitoring to prevent terrorist attacks, count people for general statistics, or analyze customer behavior based on walking paths within shopping centers.
<h3>How does object detection work?</h3>
To explore the concept of object detection, it's useful to begin with image classification and work up through levels of incremental complexity.
<ol><li><b>Image classification</b> aims at assigning an image to one of a number of different categories (e.g. car, dog, cat, human, etc.), essentially answering the question <em>“What is in this picture?”</em>. One image has only one category assigned to it.</li><li><b>Object localization</b> then allows us to locate our object in the image, so the question changes to <em>“What is it and where is it?”</em>.</li><li><b>Object detection</b> takes this a step further by finding all the objects in an image and drawing the so-called <b>bounding boxes</b> around them.</li></ol>
In a real-life scenario, we rarely need to locate just one object; instead, we need to detect multiple objects in one image. For example, a <b>self-driving car</b> has to find the location of other cars, traffic lights, signs, and humans and take appropriate action based on this information.
Beyond bounding boxes, there are also situations where we want to find the exact, pixel-level boundaries of our objects. This process is called <b>instance segmentation</b>, but that is a topic for another post.
<img class="size-full wp-image-1127" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0224740e2bdeffdb13ac3_types.webp" alt="Image 1 - Instance segmentation process" width="640" height="270" /> Instance segmentation process
<h3>Object detection algorithms</h3>
There are a few different algorithms for object detection, and they can be split into two groups.
<h3><strong>Algorithms based on classification</strong></h3>
They are implemented in two stages:
<ol><li>They select regions of interest in an image.</li><li>They classify these regions using convolutional neural networks.</li></ol>
This solution can be slow because we have to run a prediction for every selected region. A widely known example of this type of algorithm is the region-based convolutional neural network (R-CNN) and its cousins Fast R-CNN, Faster R-CNN, and the latest addition to the family: Mask R-CNN.
<h3><strong>Algorithms based on regression</strong></h3>
Instead of selecting interesting parts of an image, they predict classes and bounding boxes for the whole image <b>in one run of the algorithm</b>. The best-known examples from this group are the <b>YOLO (You Only Look Once)</b> family of algorithms, SSD (Single Shot MultiBox Detector), and RetinaNet. They are commonly used for real-time object detection as, in general, they trade a bit of accuracy for large improvements in speed.
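To make the contrast concrete, here is a minimal, schematic sketch of the two approaches. The functions are hypothetical stand-ins rather than any real library's API; the point is only that a classification-based detector runs its network once per region, while a regression-based detector runs it once per image.
<pre><code class="language-python">
import numpy as np

def propose_regions(image):
    """Hypothetical stand-in for a region-proposal step (e.g. selective search)."""
    return [image[:50, :50], image[10:90, 10:90]]

def classify_region(region):
    """Hypothetical stand-in for a CNN classifier applied to a single crop."""
    return {"class": "car", "score": 0.90}

def two_stage_detect(image):
    # Classification-based (R-CNN style): the network runs once *per* region.
    return [classify_region(region) for region in propose_regions(image)]

def single_shot_detect(image):
    # Regression-based (YOLO/SSD style): a single forward pass over the whole
    # image predicts boxes and classes for every grid cell at once.
    return [{"box": (0.5, 0.5, 0.2, 0.3), "class": "car", "score": 0.90}]

image = np.zeros((100, 100, 3))
print(len(two_stage_detect(image)), "per-region passes vs. one full-image pass")
</code></pre>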
<h2 id="anchor-2"><b>Understanding YOLO object detection: the YOLO algorithm</b></h2>
To understand the YOLO algorithm, it is necessary to establish what is actually being predicted. Ultimately, we aim to predict the class of an object and the bounding box specifying the object's location. Each bounding box can be described using four descriptors:
<ol><li style="font-weight: 400;">center of the bounding box (<b>b<sub>x</sub></b>, <b>b<sub>y</sub></b>)</li><li style="font-weight: 400;">width (<b>b<sub>w</sub></b>)</li><li style="font-weight: 400;">height (<b>b<sub>h</sub></b>)</li><li style="font-weight: 400;">value <b>c</b> corresponding to the class of an object (e.g., car, traffic lights, etc.)</li></ol>
<blockquote>To learn more about PP-YOLO (or PaddlePaddle YOLO), which is an improvement on YOLOv4, read our explanation of why PP-YOLO is faster than YOLOv4.</blockquote>
In addition, we have to predict the <b>p<sub>c</sub></b> value, which is the probability that there is an object in the bounding box.
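Putting these descriptors together, each predicted bounding box can be thought of as a small vector. A minimal illustration with made-up numbers, assuming just three classes:
<pre><code class="language-python">
# One YOLO-style box prediction: objectness, box geometry, and class scores.
# The numbers are made up; coordinates are relative to the image size.
prediction = {
    "pc": 0.92,                   # probability that the box contains an object
    "bx": 0.45, "by": 0.60,       # center of the bounding box
    "bw": 0.20, "bh": 0.35,       # width and height of the bounding box
    "c":  [0.03, 0.90, 0.07],     # class scores, e.g. [traffic light, car, human]
}

# The confidence for a given class combines objectness and the class score:
car_confidence = prediction["pc"] * prediction["c"][1]
print(round(car_confidence, 3))   # 0.828
</code></pre>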
<img class="size-full wp-image-1141" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022470279bfd450379c66_bbox-1.webp" alt="Image 2 - Bounding box probability calculation" width="803" height="338" /> Bounding box probability calculation
As we mentioned above, when working with the YOLO algorithm we are not searching for interesting regions in our image that could potentially contain an object.
Instead, we split our image into cells, typically using a 19x19 grid. Each cell is responsible for predicting 5 bounding boxes (in case there is more than one object in the cell). Therefore, we arrive at 1,805 bounding boxes for a single image. Rather than seizing the day with #YOLO and Carpe Diem, we're looking to seize object probability. The exchange of accuracy for more speed isn't reckless behavior, but a requirement for real-time object detection.
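The 1,805 figure follows directly from the grid setup; here is a quick sanity check (the output-tensor comment assumes a YOLOv2-style encoding with the 80 COCO classes):
<pre><code class="language-python">
# With a 19x19 grid and 5 bounding boxes predicted per cell:
grid_size = 19
boxes_per_cell = 5
total_boxes = grid_size * grid_size * boxes_per_cell
print(total_boxes)  # 1805

# Each box carries pc, bx, by, bw, bh plus one score per class, so with the
# 80 COCO classes the raw output tensor has shape (19, 19, 5 * (5 + 80)).
</code></pre>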
<img class="size-full wp-image-1142" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02248d9e8a647cdc538d9_yolo-1.webp" alt="Image 3 - Image size-reduction" width="1633" height="904" /> Image size-reduction
Most of these cells and bounding boxes will not contain an object. Therefore, we use the predicted <b>p<sub>c</sub></b> value to remove boxes with a low object probability, and then discard the bounding boxes that share the largest area with the remaining ones, in a process called <b>non-max suppression</b>.
<img class="size-full wp-image-1143" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b022493dff82577ed29b2b_nonmax-1.webp" alt="Image 4 - Non-max suppression" width="1558" height="718" /> Non-max suppression
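A minimal NumPy sketch of the two filtering steps described above (first dropping boxes with a low p<sub>c</sub>, then suppressing overlapping boxes) might look like this. The thresholds and the [x1, y1, x2, y2] box format are illustrative choices, not Darknet's internals.
<pre><code class="language-python">
import numpy as np

def iou(box, boxes):
    """Intersection over union of one box against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter)

def non_max_suppression(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    # Step 1: discard boxes with a low object probability pc.
    keep = scores >= score_thresh
    boxes, scores = boxes[keep], scores[keep]
    # Step 2: keep the highest-scoring box, drop boxes that overlap it too
    # much, and repeat with whatever is left.
    order = np.argsort(scores)[::-1]
    selected = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        selected.append(best)
        order = rest[iou(boxes[best], boxes[rest]) <= iou_thresh]
    return boxes[selected], scores[selected]

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.90, 0.80, 0.75])
kept, _ = non_max_suppression(boxes, scores)
print(kept)  # only the two non-overlapping boxes survive
</code></pre>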
<h2 id="yolo-variants">YOLO Variants - Everything You Need to Know</h2>
By now you have a good idea of what the YOLO algorithm represents. If you want to learn more, you're in luck. We'll now go over 11 different algorithm variations released over the years, and then briefly summarize them to find out which flavor is the best and the most recent.
<h3>YOLOv1</h3>
The first YOLO version was announced in 2015 by Joseph Redmon, <a href="https://arxiv.org/search/cs?searchtype=author&query=Divvala%2C+S">Santosh Divvala</a>, <a href="https://arxiv.org/search/cs?searchtype=author&query=Girshick%2C+R">Ross Girshick</a>, and <a href="https://arxiv.org/search/cs?searchtype=author&query=Farhadi%2C+A">Ali Farhadi</a> in the article <a href="https://arxiv.org/pdf/1506.02640.pdf">“You Only Look Once: Unified, Real-Time Object Detection”</a>. Not long after, YOLO came to dominate the object-detection field and became the most popular algorithm in use, thanks to its speed, accuracy, and learning ability.
Instead of treating object detection as a classification problem, the authors framed it as a regression task over spatially separated bounding boxes and associated class probabilities, using a single neural network. YOLOv1 processed images in real time at 45 frames per second, while a smaller version, Fast YOLO, reached 155 frames per second and still achieved double the mAP of other real-time detectors.
<img class="size-full wp-image-12327" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0207fd014a3cd31433787_5-5.webp" alt="Image 5 - YOLOV1 algorithm scheme" width="2360" height="1514" /> YOLOV1 algorithm scheme
<h3>YOLOv2</h3>
YOLOv2 (sometimes called YOLO9000) was released a year later, in 2016, also by Joseph Redmon and Ali Farhadi, in the article <a href="https://arxiv.org/pdf/1612.08242v1.pdf">“YOLO9000: Better, Faster, Stronger”</a>. The 9000 in the name refers to the model's ability to detect over 9,000 different object categories while still running in real time. The new version was not only trained jointly on object detection and classification datasets but also gained Darknet-19 as its new backbone.
Since YOLOv2 was also a huge success and became the next state-of-the-art object detection model, more and more engineers began to experiment with this algorithm and create their own, diverse YOLO versions. Some of them will be mentioned throughout the article.
<h3>YOLOv3</h3>
The <a href="https://pjreddie.com/darknet/yolo/">new version</a> of the algorithm was released in 2018 by Joseph Redmon and Ali Farhadi in the article <a href="https://arxiv.org/pdf/1804.02767.pdf">"YOLOv3: An Incremental Improvement"</a>. It was based on the Darknet-53 architecture.
In YOLOv3, the softmax activation function was replaced with independent logistic classifiers, trained with a binary cross-entropy loss. The Darknet-19 architecture was improved and extended into Darknet-53, with 53 convolutional layers. Besides that, predictions were made at three different scales, which improved YOLOv3’s AP on small objects.
<img class="size-full wp-image-12331" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0224d8587215be0b16980_7-5.webp" alt="Image 7 - Darknet53 architecture" width="990" height="1376" /> Darknet53 architecture
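The switch to independent logistic classifiers means every class gets its own sigmoid score, so overlapping labels (for example, “person” and “woman”) can both be high at the same time, which a softmax does not allow. A minimal NumPy illustration of the idea, not the actual YOLOv3 code:
<pre><code class="language-python">
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Raw class logits for one predicted box (made-up numbers, 3 classes).
logits = np.array([2.0, -1.0, 0.5])

# Softmax forces the scores to compete and sum to 1 (single winner).
softmax = np.exp(logits) / np.exp(logits).sum()

# Independent logistic classifiers (YOLOv3) score each class on its own.
per_class = sigmoid(logits)

# Training then uses binary cross-entropy on each class independently.
targets = np.array([1.0, 0.0, 1.0])
bce = -(targets * np.log(per_class) + (1 - targets) * np.log(1 - per_class)).mean()
print(softmax.round(3), per_class.round(3), round(bce, 3))
</code></pre>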
YOLOv3 was the last YOLO variant created by Joseph Redmon, who decided to stop working on YOLO (and on computer vision research in general) to prevent his work from having a negative impact on the world. Nowadays, it is mostly used as the baseline for developing novel object-detection architectures.
<img class="size-full wp-image-12333" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0224e0ce0b7ac06529ffd_8-4.webp" alt="Image 8 - YOLOv3 performance" width="2038" height="1624" /> YOLOv3 performance
<h3>YOLOv4</h3>
The <a href="https://github.com/AlexeyAB/darknet">fourth version</a> of the YOLO algorithm was released in April 2020 by Alexey Bochkovskiy, Chien-Yao Wang, and Hong-Yuan Mark Liao in their article <a href="https://arxiv.org/pdf/2004.10934.pdf">"YOLOv4: Optimal Speed and Accuracy of Object Detection"</a>. It was based on the CSPDarknet53 architecture and introduced a fair amount of new concepts, such as <em>Weighted Residual Connections</em>, <em>Cross-Stage-Partial connections</em>, <em>cross mini-batch normalization</em>, <em>self-adversarial training</em>, <em>Mish activation</em>, <em>DropBlock</em>, and <em>CIoU loss</em>.
YOLOv4 is a continuation of the YOLO family, but was created by different researchers (not Joseph Redmon and Ali Farhadi). Its architecture consists of a CSPDarknet53 backbone, spatial pyramid pooling and PANet path aggregation as the neck, and a YOLOv3 head.
As a result, YOLOv4 reaches 10% higher Average Precision and 12% better Frames Per Second, compared to its predecessor, YOLOv3.
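Several of the components listed above are small, self-contained ideas. The Mish activation, for instance, is simply x multiplied by tanh(softplus(x)); a minimal sketch:
<pre><code class="language-python">
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x)), a smooth alternative to ReLU.
    return x * np.tanh(np.log1p(np.exp(x)))

print(mish(np.array([-2.0, 0.0, 2.0])).round(3))
</code></pre>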
<h3>YOLOv5</h3>
The <a href="https://github.com/ultralytics/yolov5">fifth iteration</a> of the most popular object detection algorithm was released shortly after YOLOv4, this time by <a href="https://docs.ultralytics.com/">Glenn Jocher</a>. For the first time, YOLO used the PyTorch deep learning framework instead of Darknet, which stirred up quite a bit of controversy among users.
No official paper was published, because YOLOv5 does not implement or invent any novel techniques; it is essentially a PyTorch extension of YOLOv3. Ultralytics used this situation to spread the word about the “new YOLO” version under its patronage. To be fair, the YOLOv5 webpage is clear, nicely built, and well written, with a lot of tutorials and tips on training and using the YOLOv5 models, and five pre-trained models are available, ready for use.
<img class="size-full wp-image-12335" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02250fd01060a37e08e98_9-4.webp" alt="Image 9 - YOLOv5 subversions" width="2536" height="840" /> YOLOv5 subversions
According to some researchers, YOLOv5 outperforms both YOLOv4 and YOLOv3, while its speed is similar to YOLOv4's.
<img class="size-full wp-image-12337" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02251b90a6b94c7bd850e_10-4.webp" alt="Image 10 - YOLOv5 performance" width="2536" height="1258" /> YOLOv5 performance
<h3>PP-YOLOv1 and v2</h3>
The <a href="https://github.com/PaddlePaddle/PaddleDetection">PP-YOLO algorithm</a> was released in July 2020 by Xiang Long et al. from the Baidu team in the articles <a href="https://arxiv.org/pdf/2007.12099.pdf">"PP-YOLO: An Effective and Efficient Implementation of Object Detector"</a> and, later, <a href="https://arxiv.org/pdf/2104.10419.pdf">"PP-YOLOv2: A Practical Object Detector"</a>. It uses the ResNet50-vd architecture as a backbone and introduces many new features, such as a <em>larger batch size</em>, <em>EMA</em>, <em>DropBlock</em>, <em>IoU loss</em>, <em>IoU aware</em>, <em>grid sensitive</em>, <em>matrix NMS</em>, <em>CoordConv</em>, and <em>Spatial Pyramid Pooling</em>.
The “PP” in PP-YOLO's name stands for “PaddlePaddle” (PArallel Distributed Deep LEarning), an open-source deep learning platform also developed by the Baidu team and used for this project. The authors used YOLOv3 as the baseline model and incorporated a number of tricks in order to obtain a better balance between effectiveness and efficiency, surpassing, for example, YOLOv4. The original DarkNet-53 backbone was replaced with ResNet50-vd, while the construction of the FPN and the head is the same as in the YOLOv3 architecture.
The follow-up version, <a href="https://arxiv.org/abs/2104.10419">PP-YOLOv2</a>, again authored by the Baidu team, was released in April 2021, with even more improvements focused mostly on accuracy rather than speed.
<img class="size-full wp-image-12339" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270b13079d36ded1d64e4_11-3.webp" alt="Image 11 - PP-YOLO performance" width="2210" height="1524" /> PP-YOLO performance
<h3>YOLOP (You Only Look Once for Panoptic Driving Perception)</h3>
The <a href="https://github.com/hustvl/YOLOP">YOLOP version</a> was released in August 2021 by Xinggang Wang et al. in their article <a href="https://arxiv.org/pdf/2108.11250.pdf">"YOLOP: You Only Look Once for Panoptic Driving Perception"</a>. It uses the CSPDarknet architecture as a backbone and packs a ton of new features, including a <em>panoptic driving perception system</em> with <em>one encoder and three decoders</em> for <em>traffic object detection</em>, <em>lane detection</em>, and <em>drivable area segmentation</em>.
It is completely different from the other YOLO versions announced in the same year. YOLOP is tailored specifically for panoptic driving tasks, not for general-purpose object detection. It is a real-time perception system, well tested in real situations, ready to assist a car in making reasonable decisions while driving.
YOLOP is composed of one shared encoder for feature extraction and three decoders to handle the specific tasks - traffic object detection, lane detection and drivable area segmentation at the same time.
As usual, the encoder consists of a backbone, which extracts the features of the input image, and a neck, which merges the features generated by the backbone. The backbone network is based on CSPDarknet, an architecture similar to YOLOv4's, while the neck is composed of a Spatial Pyramid Pooling (SPP) module and a Feature Pyramid Network (FPN) module. The three decoders (Detect Head, Drivable Area Segment Head, and Lane Line Segment Head) have an architecture similar to their YOLOv4 counterparts, except that the two segmentation heads use nearest-neighbor interpolation instead of deconvolution in their upsampling layers in order to reduce computation cost.
<img class="size-full wp-image-12341" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270a95f867c2801dc89ff_12-1.webp" alt="Image 12 - Nearest neighbor interpolation scheme" width="1900" height="1188" /> Nearest neighbor interpolation scheme
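The benefit of that swap is easy to see in code: nearest-neighbor interpolation has no learnable parameters at all, while a transposed convolution (“deconvolution”) producing the same output size adds extra weights and computation. A small PyTorch sketch; the 256-channel feature map is an illustrative assumption, not YOLOP's exact configuration:
<pre><code class="language-python">
import torch
import torch.nn as nn

x = torch.randn(1, 256, 20, 20)  # a dummy feature map

# Nearest-neighbor interpolation: doubles the spatial resolution
# with zero learnable parameters.
upsample = nn.Upsample(scale_factor=2, mode="nearest")

# Transposed convolution ("deconvolution"): same output size,
# but it introduces extra weights and computation.
deconv = nn.ConvTranspose2d(256, 256, kernel_size=2, stride=2)

print(upsample(x).shape, deconv(x).shape)           # both are (1, 256, 40, 40)
print(sum(p.numel() for p in deconv.parameters()))  # 262,400 extra parameters
</code></pre>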
YOLOP achieves state-of-the-art results on the three tasks of the BDD100K dataset in terms of both accuracy and speed. It was also the first model to perform the three panoptic perception tasks simultaneously, in real time, on an embedded device (Jetson TX2).
<img class="size-full wp-image-12343" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b2709960ce4d025acde638_13-1.webp" alt="Image 13 - YOLOP architecture" width="2284" height="1236" /> YOLOP architecture
<h3>YOLOX</h3>
Another YOLO flavor - <a href="https://github.com/Megvii-BaseDetection/YOLOX">YOLOX</a> - was released in August 2021 by Jian Sun et al. in their article <a href="https://arxiv.org/pdf/2107.08430.pdf">"YOLOX: Exceeding YOLO Series in 2021"</a>. It was based on the DarkNet53 architecture and SPP layer. Like other flavors, YOLOX also came with a bunch of characteristic features, such as <em>anchor-free mechanism</em>, <em>decoupled head</em>, <em>SimOTA</em>, <em>EMA weights updating</em>, <em>cosine lr schedule</em>, <em>IoU loss</em>, and <em>IoU-aware branch</em>.
The authors used the YOLOv3 architecture (Darknet53 with an SPP layer) as the starting point for developing the novel model. Their experiments indicated that the coupled detection head may have a negative impact on performance, and that replacing it with a decoupled one greatly improves convergence speed. The decoupled head consists of a 1×1 convolution layer that reduces the channel dimension, followed by two parallel branches, each with two 3×3 convolutional layers.
<img class="size-full wp-image-12345" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270bd60ce4d025ace025d_14.webp" alt="Image 14 - YOLOv3 head vs. the proposed decoupled head" width="1956" height="1156" /> YOLOv3 head vs. the proposed decoupled head
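A minimal PyTorch sketch in the spirit of the decoupled head described above; the channel counts, activation choice, and extra objectness output are illustrative assumptions rather than the exact YOLOX configuration:
<pre><code class="language-python">
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    """Sketch of a decoupled head: a 1x1 conv reduces the channel dimension,
    then two parallel branches (each with two 3x3 convs) handle
    classification and box regression separately."""

    def __init__(self, in_channels=256, hidden=128, num_classes=80):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, hidden, kernel_size=1)
        self.cls_branch = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
        )
        self.reg_branch = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.SiLU(),
        )
        self.cls_pred = nn.Conv2d(hidden, num_classes, 1)  # class scores
        self.reg_pred = nn.Conv2d(hidden, 4, 1)            # box (x, y, w, h)
        self.obj_pred = nn.Conv2d(hidden, 1, 1)            # objectness score

    def forward(self, x):
        x = self.stem(x)
        cls_feat, reg_feat = self.cls_branch(x), self.reg_branch(x)
        return self.cls_pred(cls_feat), self.reg_pred(reg_feat), self.obj_pred(reg_feat)

cls, box, obj = DecoupledHead()(torch.randn(1, 256, 20, 20))
print(cls.shape, box.shape, obj.shape)
</code></pre>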
YOLOX's uniqueness stems from the decision to drop the construct of box anchors, which reduces computation cost and improves inference speed. With the anchor-free mechanism, the number of predictions per location drops from 3 to 1, and each prediction directly encodes four values (two offsets from the top-left corner of the grid cell, plus the height and width of the predicted box).
Then, the center location of every object is assigned as the single positive sample (only one for each object, ignoring other high-quality predictions), with a pre-defined scale range. These changes reduce the detector's parameters and GFLOPs and deliver better AP.
Besides that, the authors present SimOTA, an advanced label-assignment technique that treats the assignment procedure as an Optimal Transport (OT) problem and achieves state-of-the-art results among assignment strategies.
YOLOX won 1st place in the Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021) using a single YOLOX-L model.
<h3>YOLOF (You Only Look One-level Feature)</h3>
The <a href="https://github.com/megvii-model/YOLOF">YOLOF</a> model was released in March 2021 by Jian Sun’s team in their article <a href="https://arxiv.org/pdf/2103.09460.pdf">"You Only Look One-level Feature"</a>. Their flavor used ResNet/ResNeXt architectures pre-trained on the ImageNet dataset. It came with a couple of new features, such as <em>single-level features</em>, <em>Dilated Encoder</em>, and <em>Uniform Matching</em>.
This time, the authors focused on adopting SiSo (single-in, single-out) encoders, instead of complex feature pyramids for detection, in order to lower the computational cost. Such a task turned out to have two main problems:
<ol><li>The detection performance of SiSo encoders drops because the range of object scales that a single feature level can cover is much more limited than with MiMo (multiple-in, multiple-out) features.</li><li>The sparse anchors in single-level features cause an imbalance problem with positive anchors.</li></ol>
<img class="size-full wp-image-12347" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270bd73d39a6e45ab3309_15.webp" alt="Image 15 - MiMo vs. SiMo vs. MiSo vs. SiSo encoders" width="2042" height="1072" /> MiMo vs. SiMo vs. MiSo vs. SiSo encoders
Two novel components in this YOLOF architecture are Dilated Encoder and Uniform Matching, which solve the above problems and bring considerable improvements in performance.
The Dilated Encoder consists of a Projector (a 1x1 convolution layer to reduce the channel dimension, followed by a 3x3 convolution layer to refine semantic context) and Residual Blocks (four blocks with different dilation rates, which generate output features with multiple receptive fields covering all object scales).
<img class="size-full wp-image-12349" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270bedd103fc6f103241c_16.webp" alt="Image 16 - Structure of a Dilated Encoder" width="1504" height="420" /> Structure of a Dilated Encoder
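A minimal PyTorch sketch in the spirit of the Dilated Encoder described above; the channel sizes and dilation rates are illustrative assumptions, not the exact YOLOF configuration:
<pre><code class="language-python">
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual block: 1x1 reduce, 3x3 dilated conv, 1x1 restore."""
    def __init__(self, channels=512, dilation=2):
        super().__init__()
        mid = channels // 4
        self.block = nn.Sequential(
            nn.Conv2d(channels, mid, 1), nn.ReLU(),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation), nn.ReLU(),
            nn.Conv2d(mid, channels, 1), nn.ReLU(),
        )

    def forward(self, x):
        return x + self.block(x)

class DilatedEncoder(nn.Module):
    """Projector (1x1 conv then 3x3 conv) followed by four residual blocks
    with different dilation rates, so a single feature map covers
    objects at multiple scales."""
    def __init__(self, in_channels=2048, channels=512):
        super().__init__()
        self.projector = nn.Sequential(
            nn.Conv2d(in_channels, channels, 1),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(channels, d) for d in (2, 4, 6, 8)])

    def forward(self, x):
        return self.blocks(self.projector(x))

print(DilatedEncoder()(torch.randn(1, 2048, 25, 25)).shape)  # (1, 512, 25, 25)
</code></pre>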
Uniform Matching takes care of the sparse-anchor problem. For every ground-truth box, the k nearest anchors are set as positive, so that all ground-truth boxes have the same number of positive anchors regardless of their size. This balance in positive samples ensures that all ground-truth boxes participate in training and contribute equally. A minimal sketch of the idea follows below.
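A minimal NumPy sketch of the uniform matching idea; the anchor grid, object centers, and the value of k are made up for illustration:
<pre><code class="language-python">
import numpy as np

def uniform_match(anchor_centers, gt_centers, k=4):
    """For each ground-truth box, mark its k nearest anchors as positive."""
    positives = set()
    for gt in gt_centers:
        dists = np.linalg.norm(anchor_centers - gt, axis=1)
        positives.update(int(i) for i in np.argsort(dists)[:k])
    return positives

anchors = np.array([[x, y] for x in range(0, 100, 10) for y in range(0, 100, 10)], float)
objects = np.array([[12.0, 15.0], [80.0, 40.0]])   # two ground-truth box centers
print(sorted(uniform_match(anchors, objects)))     # k positive anchors per object
</code></pre>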
Thanks to these changes, YOLOF achieves results comparable to its feature-pyramid rival, RetinaNet, while being 2.5x faster. YOLOF-DC5 can also run 13% faster than YOLOv4 with a 0.8 mAP improvement in overall performance.
<img class="size-full wp-image-12351" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270bef15c2f7b558f756d_17.webp" alt="Image 17 - YOLOF architecture" width="2676" height="950" /> YOLOF architecture
<h3>YOLOS (You Only Look at One Sequence)</h3>
A transformer-based YOLO version - <a href="https://github.com/hustvl/YOLOS">YOLOS</a> - was released in October 2021 by Wenyu Liu et al. in their paper <a href="https://arxiv.org/pdf/2106.00666.pdf">"You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection"</a>. It was built on an entirely different architecture and brought to light new features, such as <em>ViT transformers</em> and <em>DET tokens</em>.
The authors emphasize that YOLOS is, at this moment, a proof of concept with no performance optimizations; its purpose was to demonstrate the versatility of Transformers and their transferability from image recognition to the more challenging task of object detection.
YOLOS consists of a series of object detection models based on the ViT architecture with the fewest possible modifications and inductive biases.
The mid-sized ImageNet-1k dataset was used for pre-training, together with the vanilla ViT architecture, again with the fewest possible modifications. For the first time, 2D object detection was performed in a pure sequence-to-sequence manner, by taking a sequence of fixed-size, non-overlapping image patches as input.
One of the important changes was adopting [DET] tokens as proxies for object representations to avoid inductive biases about 2D structures and prior knowledge about the task. Besides that, image classification loss in ViT is replaced with the bipartite matching loss to perform object detection in a set prediction manner.
<img class="size-full wp-image-12353" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270bf0295be067d7d8427_18.webp" alt="Image 18 - YOLOS architecture" width="2132" height="1348" /> YOLOS architecture
A small variant of YOLOS (YOLOS-Ti) was able to achieve impressive performance compared to the highly-optimized object detectors; unfortunately, the larger YOLOS models were less competitive.
<img class="size-full wp-image-12355" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270c0dd103fc6f10324f1_19.webp" alt="Image 19 - Self-attention map visualization" width="2588" height="770" /> Self-attention map visualization
<h3>YOLOR (You Only Learn One Representation)</h3>
The final YOLO flavor we'll take a look at today is <a href="https://github.com/WongKinYiu/yolor">YOLOR</a>, and it was released in May 2021 by Hong-Yuan M. Liao et al. in their article <a href="https://arxiv.org/pdf/2105.04206.pdf">"You Only Learn One Representation: Unified Network for Multiple Tasks"</a>. It introduced two distinctive features - <em>implicit and explicit knowledge</em>.
The authors decided to use a completely novel approach and introduce both implicit and explicit knowledge into the neural network that is responsible for object-detection tasks, in order to improve its performance. The unified neural network integrates implicit knowledge and explicit knowledge and enables the learned model to contain a general representation that can complete various tasks.
To train such a unified network, explicit and implicit knowledge were incorporated together to model the error term, which was then used to guide the multi-purpose network training process. YOLOv4-CSP was taken as the baseline model.
The implicit knowledge was applied in three places:
<ol><li style="font-weight: 400;" aria-level="1">Feature alignment for FPN</li><li style="font-weight: 400;" aria-level="1">Prediction refinement</li><li style="font-weight: 400;" aria-level="1">Multi-task learning in a single model (tasks included: object detection, multi-label image classification, and feature embedding)</li></ol>
<img class="size-full wp-image-12357" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270c1a7d79377530961a4_20.webp" alt="Image 20 - YOLOR architecture" width="1578" height="1304" /> YOLOR architecture
YOLOR reached similar accuracy as Scaled-YOLOv4-P7 on object detection, but the inference speed increased by 88%:
<img class="size-full wp-image-12359" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b270c360ce4d025ace067a_21.webp" alt="Image 21 - YOLOR performance" width="2576" height="1664" /> YOLOR performance
<h3>YOLO Models Summary</h3>
Out of the five models released in 2021 (YOLOP, YOLOX, YOLOF, YOLOS, YOLOR), only YOLOR and YOLOX are in the top 10 models of the COCO benchmark. The first place, based on box AP, belongs to YOLOR, the second still to YOLOv4, and the third to PP-YOLO. YOLOF slightly misses the top ten places.
YOLOS models are proofs of concept, which at this moment, can’t compete with the state-of-the-art methods. Meanwhile, the YOLOP model is based on a completely different concept of the panoptic perception system and can’t be compared with more general object detection models.
<img class="size-full wp-image-12361" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b2718abf47c90a02cab24d_23.webp" alt="Image 22 - YOLOs performance comparison on COCO dataset" width="2586" height="1464" /> YOLOs performance comparison on COCO dataset
<h2 id="anchor-3"><b>Darknet: a YOLO implementation</b></h2>
There are a few different implementations of the YOLO algorithm on the web. The original one lives in Darknet, an open-source neural network framework. Darknet is written in C and CUDA, which makes it really fast and allows computations to run on a GPU, which is essential for real-time predictions.
If you're curious about other examples of the YOLO algorithm in action, you can take a look at a PyTorch implementation or check out YOLOv3 with some extra fast.ai functionality. For a complete overview, explore the Keras implementation.
<img class="size-full wp-image-1131" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0226092a26c843b64b7fe_darknet.webp" alt="Darknet logo" width="2700" height="2669" /> Darknet logo
Installation is simple and requires running just 3 lines of code (in order to use a GPU, it is necessary to modify the settings in the Makefile after cloning the repository). For more details, see the Darknet installation instructions.
<script src="https://gist.github.com/darioappsilon/dac5987bdee1c6e8ed8a3a9e0bb127d9.js"></script>
After installation, we can use a pre-trained model or build a new one from scratch. For example, here’s how you can detect objects on your image using a model pre-trained on the <a href="http://cocodataset.org/#home" target="_blank" rel="noopener noreferrer">COCO dataset</a>:
<script src="https://gist.github.com/darioappsilon/baa993f95931f5f79acdbfb78c87e786.js"></script>
<img class="size-full wp-image-12364" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b0226145aea1a61422cf65_toy.webp" alt="YOLO output for toy classification" width="1600" height="868" /> YOLO output
<p style="text-align: left;">As you can see in the image above, the algorithm deals well even with representations of objects, such as toys.</p>
<img class="size-full wp-image-4385" src="https://webflow-prod-assets.s3.amazonaws.com/6525256482c9e9a06c7a9d3c%2F65b02263abe86ac8591a0d1a_yolo.webp" alt="pp-yolo Appsilon co-founder Damian Rodziewicz" width="1024" height="682" /> YOLO output (2)
If you want to see more, go to the <a href="https://pjreddie.com/darknet/yolo/" target="_blank" rel="noopener noreferrer">Darknet website</a>.
<iframe src="https://www.youtube.com/embed/yQwfDxBMtXg" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen" data-mce-fragment="1"></iframe>
<blockquote>You don't have to build your Machine Learning model from scratch. In fact, it's usually better not to. Read our <a href="https://appsilon.com/transfer-learning-introduction/" target="_blank" rel="noopener noreferrer">Introduction to Transfer Learning</a> to find out why.</blockquote>
<em>This article was originally written by Michał Maj with further contributions from the <a href="https://appsilon.com/computer-vision/" target="_blank" rel="noopener noreferrer">Appsilon ML team</a>.</em>
<hr />
<h2 id="redmon">Resources</h2>
<ul><li>Need help with ML solutions? Reach out to <a href="https://appsilon.com/computer-vision/" target="_blank" rel="noopener noreferrer">Appsilon</a></li><li style="font-weight: 400;"><a href="https://appsilon.com/ship-recognition-in-satellite-imagery-part-ii/" target="_blank" rel="noopener noreferrer">Ship recognition in satellite imagery</a></li><li style="font-weight: 400;"><a href="https://appsilon.com/object-recognition-transfer-learning/" target="_blank" rel="noopener noreferrer">Recognizing animals in photos</a></li><li style="font-weight: 400;"><a href="https://appsilon.com/ai-for-assisting-in-natural-disaster-relief-efforts-the-xview2-competition/" target="_blank" rel="noopener noreferrer">Natural disaster relief assistance</a></li><li style="font-weight: 400;"><a href="https://appsilon.com/using-ai-identify-wildlife-camera-trap-images-serengeti/" target="_blank" rel="noopener noreferrer">Identifying wildlife in the Serengeti</a></li><li style="font-weight: 400;"><a href="https://appsilon.com/ai-for-wildlife-image-classification-appsilon-ai4g-project-receives-google-grant/" target="_blank" rel="noopener noreferrer">Assisting biodiversity conservation efforts in Gabon</a></li></ul>
<h3>Follow Appsilon for More</h3><ul><li>Appsilon is hiring! See open roles <a href="https://appsilon.com/careers/" target="_blank" rel="noopener noreferrer">here</a>.</li><li style="font-weight: 400;">Follow <a href="https://twitter.com/appsilon" target="_blank" rel="noopener noreferrer">@Appsilon</a> on Twitter</li><li style="font-weight: 400;">Follow Appsilon on <a href="https://www.linkedin.com/company/appsilon" target="_blank" rel="noopener noreferrer">LinkedIn</a></li><li style="font-weight: 400;">Try out our R Shiny <a href="http://shiny.tools" target="_blank" rel="noopener noreferrer">open source</a> packages</li></ul>