Comments (7)
It's perfectly fine to ask here. Give me a sec and I will explain it to you.
Could you please provide the plot of the AP of the class Monosit
in both cases? That will help me explain!
from map.
Thanks for your response.
[Plot: Monosit AP for thresholds 0.01 and 0.5]
So, first of all, I recommend that you watch this video:
mean Average Precision
Basically, the mAP is a single-number metric used to evaluate rankings.
This is great for object detectors, since each prediction comes with a confidence level, which allows us to rank the predictions from highest to lowest confidence and then compute the mAP value.
In practice, the higher the confidence of a detection (from 0% to 100%), the more it matters: what happens at the top ranks weighs much more than what happens further down. So mAP tells you how good your detector is, taking the confidence of each prediction into account.
The AP is calculated as the area (shown in blue) under each of the plots above. Each point in the graph corresponds to a prediction, and the points are ordered by confidence. The curve goes down on a false detection and up on a true detection. In the left graph the blue dot went up 33 times (corresponding to the true detections), and in the right graph 32 times.
As you can see from the left plot, the false predictions are all concentrated at the end and are probably associated with low confidence levels, meaning that in terms of mAP you have a very good model. In this case, if you find a threshold for this class that removes those last points at the end (try, for example, a threshold of 0.1), you can get an even higher AP. If you get creative, you can even find the right threshold for each class.
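The mechanics described above can be sketched in a few lines of Python. This is a minimal, illustrative AP computation (VOC-style area under the precision-recall envelope) on made-up detections; `num_gt` is the number of ground-truth objects for the class:

```python
# Sketch of AP for one class: detections are (confidence, is_true_positive)
# pairs; the data below is made up for illustration.
def average_precision(detections, num_gt):
    # Sort by confidence, highest first (the ranking that mAP relies on).
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    precisions, recalls = [], []
    for conf, is_tp in dets:
        if is_tp:
            tp += 1   # curve goes up on a true detection
        else:
            fp += 1   # curve goes down on a false detection
        precisions.append(tp / (tp + fp))
        recalls.append(tp / num_gt)
    # Make precision monotonically decreasing (the "envelope").
    for i in range(len(precisions) - 2, -1, -1):
        precisions[i] = max(precisions[i], precisions[i + 1])
    # Area under the precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precisions, recalls):
        ap += p * (r - prev_r)
        prev_r = r
    return ap

dets = [(0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.1, False)]
print(round(average_precision(dets, num_gt=3), 3))  # → 0.917
```

Note how the single false detection at confidence 0.7 costs some area, while the one at 0.1 sits at the very end of the ranking and barely matters.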
Is there any other way to evaluate the object detection task?
mAP is the standard metric used in research papers. Your model seems to be working very well (in fact, it even seems too good to be true). You can also have a look at other metrics, like the ROC curve.
This is a great explanation; now I have a much better intuition about mAP.
In this case, if you find a threshold for this class that removes those last points at the end (try, for example, a threshold of 0.1), you can get an even higher AP. If you get creative, you can even find the right threshold for each class.
I have tried thresholds of 0.1, 0.05, 0.03, 0.02, and 0.01; the best mAP is at 0.02 (93.33%), which is 0.01% better than at 0.01 (93.32%). But I think the mAP doesn't decrease significantly as the number of false predictions (at low rank/confidence) increases. Am I right?
And what do you think about the F1 score for evaluating object detection?
Yeah, you are right, it didn't make much difference since they are the last ranks!
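This is easy to check numerically. A minimal sketch (made-up data, using the plain uninterpolated AP formula, which only accumulates precision at the ranks where a true detection occurs): in the idealized case where the extra false positives all rank last, AP does not change at all.

```python
# Tiny check that false positives appended at the very end of the
# ranking leave AP unchanged: AP only accumulates precision at the
# ranks where a true detection occurs.
def ap_at_tp_ranks(flags, num_gt):
    # flags[i] is True if the i-th ranked detection is a true positive.
    tp = 0
    total = 0.0
    for rank, is_tp in enumerate(flags, start=1):
        if is_tp:
            tp += 1
            total += tp / rank   # precision at this rank
    return total / num_gt

good = [True, True, True, False]
worse = good + [False, False, False]   # extra low-confidence FPs at the end
print(ap_at_tp_ranks(good, 3), ap_at_tp_ranks(worse, 3))  # → 1.0 1.0
```

In real evaluations the numbers can shift by a hair (as in your 93.32% vs 93.33%), because thresholding can also touch detections that are not strictly last in the global ranking.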
Well, it truly depends on your application. The F1 score balances precision and recall, so watch some videos about it (there are great ones on YouTube). Basically, it comes down to how many false detections you are willing to allow.
First, try to understand what precision and recall are. F1 is just a way to balance them both.
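For reference, F1 is the harmonic mean of precision and recall. A minimal sketch with made-up counts of true positives, false positives, and false negatives:

```python
# F1 is the harmonic mean of precision and recall; the counts below
# are made up for illustration.
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # of the detections made, how many are right
    recall = tp / (tp + fn)      # of the real objects, how many were found
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=30, fp=5, fn=10), 3))  # → 0.8
```

The harmonic mean punishes imbalance: a detector with high precision but poor recall (or vice versa) gets a low F1, unlike a simple average.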
OK, thank you so much, sir. I'll learn more.
In this case, if you find a threshold for this class that removes those last points at the end (try, for example, a threshold of 0.1), you can get an even higher AP. If you get creative, you can even find the right threshold for each class.
Hello @Cartucho ,
How can I change/adjust the confidence threshold in your repo, please?
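One practical workaround (a sketch, not a built-in feature of the repo) is to pre-filter your detection files before running the evaluation. This assumes detection-results lines of the form `class confidence left top right bottom`; check your own files before relying on it:

```python
# Hedged sketch: drop detections below a minimum confidence, assuming
# each line looks like "class confidence left top right bottom".
def filter_detections(lines, min_conf):
    kept = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and float(parts[1]) >= min_conf:
            kept.append(line)
    return kept

lines = [
    "Monosit 0.92 10 20 110 120",
    "Monosit 0.04 30 40 80 90",   # dropped at threshold 0.1
]
print(filter_detections(lines, 0.1))  # → ['Monosit 0.92 10 20 110 120']
```

You could apply this per class (a dict mapping class name to threshold) to try the per-class tuning idea discussed above.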
Related Issues (20)
- Having trouble putting inputs in correct format HOT 1
- Query regarding Average Precision Curve
- If there are no targets being detected, these pictures will not be put into calculation. Will this influence the accuracy? HOT 1
- map is 0 HOT 1
- Incorrect mAP when boxes are normalized to [0-1]
- map for validation set?
- How to use this py to calculate coco dataset's mAP?
- how to set up individual IoU threshold value?
- Segmentation evaluation?
- A question to ask
- Does the order make an effect? HOT 4
- How to get the Map 0.5:0.95 HOT 2
- Animation with big images
- different mAP with Alexey darknet HOT 2
- I'm confused, there is no approximation ?
- Map goog
- mAP for each frame HOT 1
- Deprecation warnings with Matplotlib 3.4 HOT 1
- Does it accept empty file?
- convert_dr_yolo.py script not creating any files in detection-results folder