
Comments (5)

hasanirtiza commented on June 4, 2024

Can you paste the full command you used to test this model?


lazrak-mouad commented on June 4, 2024

Yes, of course.

The full command: python tools/test_crowdhuman.py configs/elephant/crowdhuman/cascade_hrnet.py models_pretrained/epoch_ 19 20 --out result.json

PS-1: To keep the script from hanging, I created an epoch_20.pth.stu that is a copy of epoch_19.pth.stu.

PS-2: I reduced the number of workers to 0 to avoid overloading shared memory (shm).

Results:
fpp: 0.01, score: 0.9979423880577087
fpp: 0.0178, score: 0.9969077706336975
fpp: 0.0316, score: 0.9949955940246582
fpp: 0.0562, score: 0.9920675754547119
fpp: 0.1, score: 0.9861095547676086
fpp: 0.1778, score: 0.9754815697669983
fpp: 0.3162, score: 0.9545246362686157
fpp: 0.5623, score: 0.9163613319396973
fpp: 1.0, score: 0.8399631381034851
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
ori mean [0.79573628 0.75040029 0.69687723 0.64711036 0.58707189 0.52618753
0.46099312 0.40200195 0.34641451]
mean [-0.22848745 -0.28714849 -0.36114602 -0.43523843 -0.53260799 -0.64209761
-0.77437216 -0.91129833 -1.06011922]
real mean -0.5813906340696805
[0.5591202939538293, 0.5591202939538293, 0.5591202939538293, 0.5591202939538293]
Checkpoint 19: [Reasonable: 55.91%], [Bare: 55.91%], [Partial: 55.91%], [Heavy: 55.91%]
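
For what it's worth, the 55.91% figure appears to be a log-average miss rate: the "ori mean" row holds the miss rates at the nine FPPI points, "mean" is their element-wise log, and "real mean" is the average of those logs, so exp(-0.5814) ≈ 0.5591 gives the reported 55.91%. A minimal sketch of that arithmetic (values copied from the output above; this is illustrative, not the repo's evaluation code):

```python
import numpy as np

# "ori mean" row from the log above: miss rates at the nine FPPI points.
miss_rate = np.array([0.79573628, 0.75040029, 0.69687723, 0.64711036,
                      0.58707189, 0.52618753, 0.46099312, 0.40200195, 0.34641451])

log_mr = np.log(miss_rate)   # matches the "mean" row (-0.2285 ... -1.0601)
avg_log = log_mr.mean()      # matches "real mean" (about -0.5814)
lamr = np.exp(avg_log)       # log-average miss rate

print(f"real mean: {avg_log:.4f}")
print(f"log-average miss rate: {lamr * 100:.2f}%")  # ~55.91%, the Checkpoint 19 number
```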

PS-3: The AP = 12.4 was obtained with another repo, not the official test file.


hasanirtiza commented on June 4, 2024

You need to run test.py; see the end of README.md for how to run the test for CrowdHuman:

./tools/test.py configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/epoch_19.pth.stu 8 --out CrowdHuman12.pkl --eval bbox

or this for multiple GPUs:

./tools/dist_test.sh configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/epoch_19.pth.stu 8 --out CrowdHuman12.pkl --eval bbox


lazrak-mouad commented on June 4, 2024

Thank you so much for the guidelines.

After running the following command from the Pedestron directory: ./tools/dist_test.sh configs/elephant/crowdhuman/cascade_hrnet.py ./models_pretrained/epoch_19.pth.stu 1 --out CrowdHuman12.pkl --eval bbox, I got the following results:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.536
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.840
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.575
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.421
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.534
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.561
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.035
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.278
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.627
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.560
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.621
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.645

Is there an explanation for the multiple values of AP and AR?

Thank you in advance.


hasanirtiza commented on June 4, 2024

Read the COCO evaluation protocol in detail.
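
To expand on that a bit: the multiple AP/AR rows are all part of the standard COCO summary, which averages AP over IoU thresholds 0.50:0.95 and also reports it at fixed IoU 0.50 and 0.75, broken down further by object area (small/medium/large) and by the maximum number of detections kept per image (1/10/100). A minimal pycocotools sketch that produces the same 12-row table, assuming the ground truth and detections have already been exported to COCO JSON (both file names below are placeholders):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: CrowdHuman val annotations and the detector's results,
# both in COCO JSON format.
coco_gt = COCO("crowdhuman_val_coco.json")
coco_dt = coco_gt.loadRes("detections_coco.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()    # match detections to ground truth image by image
coco_eval.accumulate()  # aggregate over IoU thresholds, areas, and maxDets
coco_eval.summarize()   # prints the 12 AP/AR lines shown in the comment above
```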
