Comments (3)
Hi @einareinarsson,
As your txt files with detections have the same name as the ground-truth annotations, your structure looks fine.
Please, try the following steps:
- Update the repository; I made some updates to the code.
- Change the class_id in your prediction files (the first element on each line) to an integer. For example, instead of:
6.0 0.9641061425209045 0.7914067506790161 0.36787542700767517 0.07676965743303299 0.20017971098423004
it should be:
6 0.9641061425209045 0.7914067506790161 0.36787542700767517 0.07676965743303299 0.20017971098423004
- For both ground-truth and detections, select a folder containing your image files.
- For both ground-truth and detections, choose a file listing your classes. In this file, the order of the classes must follow the <class_id> of your txt files. An example of this file can be seen here, where 'aeroplane' is class_id=0, 'bicycle' is class_id=1, and so on.
- For the ground-truth coordinates format, choose (*) YOLO (.txt). For the detections coordinates format, select the first option: (*) <class_id> <confidence> <x_center> <y_center> <width> <height> (RELATIVE)
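The class_id fix in the second step can be sketched as below. This is a minimal example, not part of the tool itself; `fix_class_ids` is a hypothetical helper that assumes one folder of detection .txt files in the format shown above:

```python
from pathlib import Path

def fix_class_ids(det_dir):
    """Rewrite each detection .txt so the first field (class_id) is an integer."""
    for txt in Path(det_dir).glob("*.txt"):
        fixed = []
        for line in txt.read_text().splitlines():
            parts = line.split()
            if not parts:
                continue
            # '6.0' -> '6'; the remaining fields (confidence, box) are kept as-is
            parts[0] = str(int(float(parts[0])))
            fixed.append(" ".join(parts))
        txt.write_text("\n".join(fixed) + "\n")
```

Running it once over the detections folder before loading the files into the tool should be enough.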
Then I believe you will be able to solve your problem and evaluate your detections.
Also check whether you can see the ground-truth and detection statistics by clicking the buttons show ground-truth statistics and show detections statistics. If not, please attach some of your files here (detections, ground truth, images, and the txt file listing your classes) so I can investigate the problem.
The images are needed because your bounding boxes are expressed in relative values; to determine their exact position within an image, the tool must know the image width and height. That's why you have to provide the images. :)
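The dependence on the image size can be illustrated with a small conversion from YOLO relative coordinates to absolute pixel corners (a sketch, with a hypothetical helper name; the tool does this internally):

```python
def yolo_rel_to_abs(x_c, y_c, w, h, img_w, img_h):
    """Convert YOLO relative (x_center, y_center, width, height) to
    absolute pixel corners (left, top, right, bottom)."""
    abs_w, abs_h = w * img_w, h * img_h
    left = x_c * img_w - abs_w / 2
    top = y_c * img_h - abs_h / 2
    return left, top, left + abs_w, top + abs_h
```

The same relative box maps to different pixel positions depending on `img_w` and `img_h`, which is why the box cannot be located without the image.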
from review_object_detection_metrics.
Thanks for the answer.
Meanwhile I discovered that the root problem was the complex filenames. When original filenames like 954jhj53hjhj822v_jpg.rf.3232xxvcb432ijoffg.txt were renamed to something like 954jhj53hjhj822v.txt, it worked.
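The renaming step can be sketched like this. It is a minimal assumption-laden example: `simplify_names` is a hypothetical helper, and the `_jpg.rf.` marker is taken from the example filename above:

```python
from pathlib import Path

def simplify_names(folder, marker="_jpg.rf."):
    """Rename files like '954...822v_jpg.rf.3232...ffg.txt' to '954...822v.txt'
    by dropping everything from the marker onward, keeping the extension."""
    for f in Path(folder).iterdir():
        if marker in f.name:
            base = f.name.split(marker)[0]
            f.rename(f.with_name(base + f.suffix))
```

Run it over both the annotation and detection folders so the two sides keep matching names.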
Two feature requests:
- A visualization option for predictions over the images (as for the ground truths)
- An export option to save the metrics data to a file
Hi @einareinarsson ,
Good you got it working.
About your suggestions:
- Visualizing the predictions over the images (as is done with the ground truths) is already implemented. After choosing the information related to your detections, click the button show detections statistics. There you will see both the detection and ground-truth bounding boxes, and you can also save those images with the bounding boxes drawn on.
- The plots containing the results are saved in the output folder. Besides the plots, you can copy the results from the Result area and paste them into an external text file. But good idea: it would be better if the tool automatically saved the results to a text file in the output folder.
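The suggested export could look roughly like this (a sketch only; `save_results` is a hypothetical helper, not an existing function of the tool, and it assumes the results are available as a metric-name-to-value mapping):

```python
from pathlib import Path

def save_results(results, output_dir="output"):
    """Write a metric -> value mapping to output/results.txt, one per line."""
    out = Path(output_dir)
    out.mkdir(exist_ok=True)
    lines = [f"{name}: {value}" for name, value in results.items()]
    (out / "results.txt").write_text("\n".join(lines) + "\n")
```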
Thanks