
What is mAP for Objects Detection tasks?

mAP as the main metric for Object Detection

🎓 Related Course where mAP is used

Training YOLO v3 for Objects Detection with Custom Data. Build your own detector by labelling, training, and testing on images, video, and in real time with a camera. Join here: https://www.udemy.com/course/training-yolo-v3-for-objects-detection-with-custom-data/

[Figure: Detections on Images]

🚩 Concept Map of the Course

[Figure: Concept Map of the Course]

👉 Join the Course

https://www.udemy.com/course/training-yolo-v3-for-objects-detection-with-custom-data/


Content


mAP (mean Average Precision) is a metric used to evaluate the accuracy of a model, in our case, for Object Detection tasks. In general, to calculate mAP for a custom model trained for Object Detection, the Average Precision (AP) is first calculated for every class in the model. Then, the mean of these Average Precisions across all classes gives mAP. Pay attention: some papers use Average Precision and mAP interchangeably.
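As an illustration (a minimal sketch, not the course code), once per-class AP values are available, mAP is simply their mean; the class names and AP numbers below are made-up placeholders:

```python
# Minimal sketch: mAP is the mean of the per-class Average Precision values.
# The classes and AP values below are placeholders for illustration only.

average_precisions = {
    "car": 0.72,
    "person": 0.65,
    "bicycle": 0.58,
}

mAP = sum(average_precisions.values()) / len(average_precisions)
print(f"mAP = {mAP:.3f}")  # (0.72 + 0.65 + 0.58) / 3 = 0.650
```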


Understanding how Average Precision is calculated requires reviewing the definitions of the parameters involved.

  • Threshold is used to decide whether a predicted Bounding Box (BB) counts as True or False. The threshold is usually set to one of the following values: 50%, 75%, 95%.

  • Intersection Over Union (IoU) is a measure used to evaluate the overlap between two Bounding Boxes (BB). IoU shows how much a predicted BB overlaps with the so-called Ground Truth BB (the one that contains the real object). By comparing IoU with the threshold, it is possible to decide whether a predicted BB is a True Positive (valid, in other words) or a False Positive (not valid). IoU is calculated as the overlapping area between the predicted BB and the Ground Truth BB divided by the union area of the two BB, as shown in the Figure below (see also the sketch after this list). [Figure: Intersection Over Union (IoU)]

  • True Positive (TP) is the number of BB with correct predictions, i.e. IoU ≥ threshold

  • False Positive (FP) is the number of BB with wrong predictions, i.e. IoU < threshold

  • False Negative (FN) is the number of Ground Truth BB that are not detected

  • True Negative (TN) is the number of BB that are correctly not predicted (there could be arbitrarily many of these within an image, since any BB that does not overlap a Ground Truth BB would qualify); this parameter is not used for calculating the metrics

  • Precision represents the percentage of correct positive BB predictions (how accurate the predicted BB are) and shows the ability of the trained model to detect relevant objects. Precision is calculated as follows: Precision = TP / (TP + FP)

  • Recall represents the percentage of True Positive BB predictions among all relevant Ground Truth BB and shows the ability of the trained model to detect all Ground Truth BB. Recall is calculated as follows: Recall = TP / (TP + FN)

  • The Precision and Recall curve represents the performance of the trained model by plotting Precision values against Recall values, forming a kind of zig-zag graph as shown in the Figure below (a code sketch of these definitions follows after this list). [Figure: Precision and Recall curve]
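Below is a minimal Python sketch (not from the original repository) of the definitions above: an IoU function for two boxes assumed to be in (x_min, y_min, x_max, y_max) format, and Precision / Recall computed from TP, FP, FN counts. The example boxes and the threshold value are illustrative assumptions.

```python
# Minimal sketch of the definitions above (illustrative, not the course code).
# Boxes are assumed to be in (x_min, y_min, x_max, y_max) format.

def iou(box_a, box_b):
    """Intersection Over Union of two bounding boxes."""
    # Coordinates of the overlapping (intersection) rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP), Recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    return precision, recall


# A predicted BB counts as True Positive if IoU >= threshold
threshold = 0.5
print(iou((10, 10, 60, 60), (30, 30, 80, 80)) >= threshold)  # False (IoU is about 0.22)
```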

In order to plot the Precision and Recall curve, the detected BB need to be sorted by their confidences in descending order. Then, Precision and Recall are calculated for every detected BB, as shown in the Table below. In the current example, the threshold is set to 50%, meaning that a predicted BB is correct if IoU ≥ 0.5. The total number of correct predictions is TP = 5 and the total number of wrong predictions is FP = 5.

BB   Confidence   TP or FP   Precision    Recall
 1   96%          TP         1/1 = 1      1/5 = 0.2
 2   94%          FP         1/2 = 0.5    1/5 = 0.2
 3   90%          TP         2/3 = 0.67   2/5 = 0.4
 4   89%          TP         3/4 = 0.75   3/5 = 0.6
 5   81%          FP         3/5 = 0.6    3/5 = 0.6
 6   75%          TP         4/6 = 0.67   4/5 = 0.8
 7   63%          TP         5/7 = 0.71   5/5 = 1
 8   59%          FP         5/8 = 0.62   5/5 = 1
 9   54%          FP         5/9 = 0.56   5/5 = 1
10   51%          FP         5/10 = 0.5   5/5 = 1
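The table can be reproduced with a short sketch like the one below (an assumed illustration, not the course code): detections are sorted by confidence, and Precision and Recall are accumulated after each one. The confidences and TP/FP flags are taken from the table above, and the 5 Ground Truth BB are assumed.

```python
# Sketch: build the Precision/Recall table by accumulating TP and FP
# over detections sorted by confidence (values taken from the table above).

detections = [  # (confidence, is_true_positive)
    (0.96, True), (0.94, False), (0.90, True), (0.89, True), (0.81, False),
    (0.75, True), (0.63, True), (0.59, False), (0.54, False), (0.51, False),
]
total_gt = 5  # total number of Ground Truth BB

tp = fp = 0
precisions, recalls = [], []
for _confidence, is_tp in sorted(detections, key=lambda d: d[0], reverse=True):
    if is_tp:
        tp += 1
    else:
        fp += 1
    precisions.append(tp / (tp + fp))
    recalls.append(tp / total_gt)

print(precisions)  # approx. [1.0, 0.5, 0.67, 0.75, 0.6, 0.67, 0.71, 0.62, 0.56, 0.5]
print(recalls)     # [0.2, 0.2, 0.4, 0.6, 0.6, 0.8, 1.0, 1.0, 1.0, 1.0]
```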

AP is calculated by considering the area under the interpolated Precision and Recall curve. First, the Recall axis is divided into 11 points: [0, 0.1, 0.2, …, 1], as shown in the Figure below. [Figure: Interpolated Precision and Recall curve]

Then, the average of the maximum Precision values is computed over these 11 Recall points: AP = (1/11) * Σ p_interp(r), where p_interp(r) is the maximum Precision at any Recall greater than or equal to r.

From our example, AP is calculated as follows:
AP = (1/11) * (1 + 1 + 1 + 0.75 + 0.75 + 0.75 + 0.75 + 0.71 + 0.71 + 0.71 + 0.71) ≈ 0.81
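A sketch of the 11-point interpolation (an illustration, not the original implementation), using the Precision and Recall values from the table above:

```python
# Sketch: 11-point interpolated AP from the Precision/Recall values above.
precisions = [1.0, 0.5, 2/3, 0.75, 0.6, 4/6, 5/7, 5/8, 5/9, 0.5]
recalls    = [0.2, 0.2, 0.4, 0.6, 0.6, 0.8, 1.0, 1.0, 1.0, 1.0]

recall_points = [i / 10 for i in range(11)]  # [0.0, 0.1, ..., 1.0]

ap = 0.0
for r in recall_points:
    # Interpolated precision: the maximum precision at any recall >= r
    candidates = [p for p, rec in zip(precisions, recalls) if rec >= r]
    ap += max(candidates) if candidates else 0.0
ap /= 11

print(round(ap, 2))  # 0.81
```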


MIT License

Copyright (c) 2020 Valentyn N Sichkar

github.com/sichkar-valentyn
