
coco-analyze's People

Contributors

fran6co, matteorr, mrronchi, willbrennan


coco-analyze's Issues

KeyError for analyzing 2017 dataset

I ran run_analysis.py on the output of the 2017 keypoint challenge and got this error:

Traceback (most recent call last):
  File "run_analysis.py", line 127, in <module>
    main()
  File "run_analysis.py", line 115, in main
    paths = occlusionAndCrowdingSensitivity( coco_analyze, .75, saveDir )
  File "/home/kamal/dev/coco-analyze/analysisAPI/occlusionAndCrowdingSensitivity.py", line 49, in occlusionAndCrowdingSensitivity
    total_gts += overlap_index[no]
KeyError: 7

Has something changed since the 2014 keypoint challenge?
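
For anyone hitting the same KeyError, a hedged workaround (assuming overlap_index is a plain dict keyed by the number of overlapping annotations, which the traceback suggests) is to skip overlap bins that are missing from it, i.e. replace line 49 of occlusionAndCrowdingSensitivity.py with:

# count only the overlap levels that actually occur; bins missing from
# overlap_index (here the value 7) contribute zero ground truths
total_gts += overlap_index.get(no, 0)

This silences the error, but it does not answer whether the 2017 annotations genuinely differ from the 2014 ones.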

Requirements and update to cocoeval.py

Thank you for this great repo! The requirements should be updated. I had to try several different package versions to get this running. Below is the environment I got it working with. Also, cocoeval.py should be updated with this fix.

Output of $ pip freeze:

certifi==2021.10.8
colour==0.1.5
cycler==0.11.0
Cython==0.29.27
fonttools==4.29.1
imageio==2.15.0
Jinja2==3.0.3
kiwisolver==1.3.2
MarkupSafe==2.0.1
matplotlib==3.5.1
networkx==2.6.3
numpy==1.21.5
packaging==21.3
Pillow==9.0.1
pycocotools==2.0.2
pyparsing==3.0.7
python-dateutil==2.8.2
PyWavelets==1.2.0
scikit-image==0.18.0
scipy==1.1.0
six==1.16.0
tifffile==2021.11.2

Replacing attribute 'set_axis_bgcolor' with 'set_facecolor'

Hi @matteorr, thanks for your great work. When I run the tool, I hit the problem that 'AxesSubplot' object has no attribute 'set_axis_bgcolor' in 'scoringErrors.py'. This attribute has been replaced by 'set_facecolor' in newer Matplotlib, so it would be better to update it across the scripts. Thanks.
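
For reference, a minimal sketch of the one-line change, assuming a recent Matplotlib in which the deprecated method has been removed:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# ax.set_axis_bgcolor('lightgray')   # deprecated in Matplotlib 2.0 and removed later
ax.set_facecolor('lightgray')        # drop-in replacement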

Swapped keypoints

Hi @matteorr, could you explain how the swapped keypoints are computed in your code? I notice that at line 326 of cocoanalyze.py they are computed as:

# swapped keypoints are those that have oks >= 0.5 but on a keypoint of another person
swap_kpts = np.logical_and.reduce((oks_max >= self.params.jitterKsThrs[0], oks_argmax != 0, oks_argmax != num_anns))
swap_kpts = np.logical_and(swap_kpts, gt_kpt_v != 0)*1

but I do not understand the meaning of 'oks_argmax != 0 and oks_argmax != num_anns' here.
Thank you.
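
For context, here is a small hedged toy example of how np.logical_and.reduce combines those conditions; the reading of index 0 as the matched annotation and num_anns as a "no annotation" sentinel is my own interpretation of the snippet, not confirmed by the author:

import numpy as np

# illustrative values only, not taken from the repository
oks_max    = np.array([0.9, 0.6, 0.4, 0.8])   # best per-keypoint OKS over all annotations
oks_argmax = np.array([1,   0,   2,   3  ])   # index of the annotation achieving that OKS
gt_kpt_v   = np.array([2,   2,   1,   0  ])   # ground-truth visibility flags
num_anns   = 3

swap_kpts = np.logical_and.reduce((oks_max >= 0.5,        # close to *some* ground-truth keypoint...
                                   oks_argmax != 0,       # ...but not the matched annotation
                                   oks_argmax != num_anns))
swap_kpts = np.logical_and(swap_kpts, gt_kpt_v != 0) * 1
print(swap_kpts)   # [1 0 0 0] -> only the first keypoint would count as swapped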

Big difference between COCOEvaluator and coco_analyze.evaluate on custom dataset

Hello, I just found your great repository and I would like to use it and cite it in my paper. I am using it on a custom dataset of trees, with only one class and 3 keypoints, so I had to change a couple of things to make it work with 3 keypoints instead of 17. The problem is that COCOEvaluator and coco_analyze.evaluate give me totally different results.

When I evaluate with COCOEvaluator, I get this:

[05/01 11:37:51 d2.evaluation.fast_eval_api]: Evaluate annotation type *keypoints*
[05/01 11:37:52 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 0.68 seconds.
[05/01 11:37:52 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[05/01 11:37:52 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 0.01 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.789
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.832
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.800
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.684
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.932
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.834
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.872
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.842
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.749
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.965

Then I use the .json file generated by COCOEvaluator and evaluate with coco_analyze.evaluate

<mrr:2.0>Running per image evaluation...
<mrr:2.0>Evaluate annotation type *keypoints*
<mrr:2.0>DONE (t=1.73s).
<mrr:2.0>Accumulating evaluation results...
<mrr:2.0>DONE (t=0.05s).
<mrr:2.0>Verbose Summary:
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.387
 Average Precision  (AP) @[ IoU=0.55      | area=   all | maxDets= 20 ] = 0.324
 Average Precision  (AP) @[ IoU=0.60      | area=   all | maxDets= 20 ] = 0.245
 Average Precision  (AP) @[ IoU=0.65      | area=   all | maxDets= 20 ] = 0.172
 Average Precision  (AP) @[ IoU=0.70      | area=   all | maxDets= 20 ] = 0.111
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.062
 Average Precision  (AP) @[ IoU=0.80      | area=   all | maxDets= 20 ] = 0.032
 Average Precision  (AP) @[ IoU=0.85      | area=   all | maxDets= 20 ] = 0.011
 Average Precision  (AP) @[ IoU=0.90      | area=   all | maxDets= 20 ] = 0.002
 Average Precision  (AP) @[ IoU=0.95      | area=   all | maxDets= 20 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.135
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.119
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.171
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.536
 Average Recall     (AR) @[ IoU=0.55      | area=   all | maxDets= 20 ] = 0.486
 Average Recall     (AR) @[ IoU=0.60      | area=   all | maxDets= 20 ] = 0.425
 Average Recall     (AR) @[ IoU=0.65      | area=   all | maxDets= 20 ] = 0.355
 Average Recall     (AR) @[ IoU=0.70      | area=   all | maxDets= 20 ] = 0.285
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.213
 Average Recall     (AR) @[ IoU=0.80      | area=   all | maxDets= 20 ] = 0.149
 Average Recall     (AR) @[ IoU=0.85      | area=   all | maxDets= 20 ] = 0.092
 Average Recall     (AR) @[ IoU=0.90      | area=   all | maxDets= 20 ] = 0.044
 Average Recall     (AR) @[ IoU=0.95      | area=   all | maxDets= 20 ] = 0.009
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.259
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.172
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.347

When I run

print(len(coco_analyze._gts))
print(len(coco_analyze._dts))
false_pos_dts = coco_analyze.false_pos_dts
false_neg_gts = coco_analyze.false_neg_gts
for oks in coco_analyze.params.oksThrs:
    print("Oks:[%.2f] - Num.FP:[%d] - Num.FN:[%d]"%(oks,len(false_pos_dts['all',str(oks)]),len(false_neg_gts['all',str(oks)])))

it returns:

_gts=7704
_dts=10256
Oks:[0.50] - Num.FP:[4434] - Num.FN:[1882]
Oks:[0.55] - Num.FP:[4437] - Num.FN:[1885]
Oks:[0.60] - Num.FP:[4441] - Num.FN:[1889]
Oks:[0.65] - Num.FP:[4455] - Num.FN:[1903]
Oks:[0.70] - Num.FP:[4457] - Num.FN:[1905]
Oks:[0.75] - Num.FP:[4460] - Num.FN:[1908]
Oks:[0.80] - Num.FP:[4463] - Num.FN:[1911]
Oks:[0.85] - Num.FP:[5036] - Num.FN:[2484]
Oks:[0.90] - Num.FP:[6801] - Num.FN:[4249]
Oks:[0.95] - Num.FP:[9345] - Num.FN:[6793]

For both, I use KEYPOINT_OKS_SIGMAS = (.25, .25, .25).

Is there something I am missing?
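
In case it helps reproduce the comparison, here is a hedged sketch of forcing the same three sigmas on the plain pycocotools side (file names are placeholders; whether COCOanalyze picks up custom sigmas in the same way would need to be checked against its Params class):

import numpy as np
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

sigmas = np.array([0.25, 0.25, 0.25])   # same values as cfg.TEST.KEYPOINT_OKS_SIGMAS in detectron2

coco_gt = COCO('trees_gt.json')                      # placeholder ground-truth file
coco_dt = coco_gt.loadRes('trees_detections.json')   # placeholder detections file
coco_eval = COCOeval(coco_gt, coco_dt, 'keypoints')
coco_eval.params.kpt_oks_sigmas = sigmas             # attribute available in pycocotools >= 2.0
coco_eval.evaluate(); coco_eval.accumulate(); coco_eval.summarize()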

ImportError

Hi,
When I run COCOanalyze_demo.ipynb in a Jupyter notebook, I get an ImportError:

---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-4-3e23b1a80e3d> in <module>()
      4 
      5 ## COCO imports
----> 6 from pycocotools.coco import COCO
      7 from pycocotools.cocoeval import COCOeval
      8 from pycocotools.cocoanalyze import COCOanalyze

/home/jeff/coco-analyze/pycocotools/coco.py in <module>()
     53 import copy
     54 import itertools
---> 55 from . import mask as maskUtils
     56 import os
     57 from collections import defaultdict

/home/jeff/coco-analyze/pycocotools/mask.py in <module>()
      1 __author__ = 'tsungyi'
      2 
----> 3 import pycocotools._mask as _mask
      4 
      5 # Interface for manipulating masks stored in RLE format.

ImportError: No module named _mask

regards
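
(Note: pycocotools._mask is a Cython extension, so this ImportError usually means the extension has not been compiled. Building it in place from the repository root, e.g. with python setup.py build_ext --inplace or the Makefile if the checkout provides one, typically resolves it.)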

Opt. Score

Hi @matteorr

Thank you so much for your great analysis.
Could you please explain more about the opt. score? It seems confusing to me; could you clarify the steps involved in computing it?

Thanks

KeyError raised when running the code

Hi, thanks for your great code. However, I've encountered an error when running it.

...
Analyzing background false positives and false negatives...
<mrr:2.0>Running per image evaluation...
<mrr:2.0>Evaluate annotation type *keypoints*
<mrr:2.0>DONE (t=3.76s).
DONE (t=4.53s).
/Users/gxy/coco/PythonAPI/analysisAPI/backgroundFalseNegErrors.py:215: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.
ax.set_axis_bgcolor('lightgray')
/Users/gxy/coco/PythonAPI/analysisAPI/backgroundFalseNegErrors.py:226: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.
ax.set_axis_bgcolor('lightgray')
Traceback (most recent call last):
  File "/Users/gxy/coco/PythonAPI/run_analysis.py", line 138, in <module>
    main()
  File "/Users/gxy/coco/PythonAPI/run_analysis.py", line 126, in main
    paths = occlusionAndCrowdingSensitivity( coco_analyze, .75, saveDir )
  File "/Users/gxy/coco/PythonAPI/analysisAPI/occlusionAndCrowdingSensitivity.py", line 49, in occlusionAndCrowdingSensitivity
    total_gts += overlap_index[no]
KeyError: 7

Process finished with exit code 1

Could you please help me?

coco_url no longer works

The coco_url of images has changed; the old format http://mscoco.org/images/IMAGEID doesn't work anymore.
The new annotation files are available at http://cocodataset.org/#download, and in them the coco_url field has changed.
I think the annotation file in this repository and the URL format used in the analysisAPI should be updated.
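
For illustration, a hedged sketch of building an image URL under the newer cocodataset.org layout (the helper name and split are assumptions, and the 2014 splits additionally prefix the file name with COCO_<split>_):

def coco_image_url(image_id, split='val2017'):
    # 2017-style file names are just the zero-padded image id, e.g. 000000397133.jpg
    return 'http://images.cocodataset.org/%s/%012d.jpg' % (split, image_id)

print(coco_image_url(397133))  # http://images.cocodataset.org/val2017/000000397133.jpg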

Undetected keypoint confused with the point of (0, 0)

Hi @matteorr, thanks for your great work. Is there a way to handle the case where an undetected keypoint left at (0, 0) is treated as a prediction?
I obtained some analysis results, shown below. Each type of error has such a case, and I think it may lead to incorrect analysis. Thank you.
(Screenshots attached showing one such case for each error type: Inversion, Jitter, Miss, and Swap.)

Error analysis for object detection

Hi,
Thanks for your great repository.

I have a question: I would like to have the same error analysis for the object detection task. Is there a way to obtain the same output for detection annotations with your implementation?

Thanks in advance for any response.

Special characters in team name

Hello,

First, thanks for sharing your great code with the community.

My issue: if I put certain characters (like '_') in the team name, the generated .tex won't compile, failing with a "Missing $ inserted" error. A solution could be to scan the team name for such characters and replace them with their LaTeX escape sequences ("\_" in this case).

Thank you. Doms.
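
A hedged sketch of such an escaping step (the helper name and character set are illustrative, not part of the repository):

# map LaTeX-special characters in the team name to their escaped form
LATEX_SPECIALS = {'_': r'\_', '&': r'\&', '%': r'\%', '$': r'\$', '#': r'\#',
                  '{': r'\{', '}': r'\}', '~': r'\textasciitilde{}',
                  '^': r'\textasciicircum{}', '\\': r'\textbackslash{}'}

def escape_latex(s):
    return ''.join(LATEX_SPECIALS.get(ch, ch) for ch in s)

print(escape_latex('my_team'))   # my\_team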

ValueError: cannot convert float NaN to integer

Hi, thanks for sharing the code.
I got the error below in the later stages of the analysis. It seems there is a problem with matplotlib, but I can't solve it. Is it a problem with my results file or with the API itself?
Thanks!

Analyzing keypoint errors...
<mrr:2.0>Running per image evaluation...
<mrr:2.0>Evaluate annotation type *keypoints*
<mrr:2.0>DONE (t=1.70s).
DONE (t=5.07s).
/home/mkocabas/coco-analyze/analysisAPI/localizationErrors.py:136: RuntimeWarning: invalid value encountered in double_scalars
  ERRORS.append(tot_errs/float(sum(err_vecs[j])))
Traceback (most recent call last):
  File "run_analysis.py", line 127, in <module>
    main()
  File "run_analysis.py", line 99, in main
    paths = localizationErrors( coco_analyze, imgs_info, saveDir )
  File "/home/mkocabas/coco-analyze/analysisAPI/localizationErrors.py", line 146, in localizationErrors
    patches, autotexts = ax1.pie( ERRORS, colors=colors)
  File "/usr/lib/python2.7/dist-packages/matplotlib/__init__.py", line 1814, in inner
    return func(ax, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/matplotlib/axes/_axes.py", line 2600, in pie
    **wedgeprops)
  File "/usr/lib/python2.7/dist-packages/matplotlib/patches.py", line 1008, in __init__
    self._recompute_path()
  File "/usr/lib/python2.7/dist-packages/matplotlib/patches.py", line 1020, in _recompute_path
    arc = Path.arc(theta1, theta2)
  File "/usr/lib/python2.7/dist-packages/matplotlib/path.py", line 861, in arc
    n = int(2 ** np.ceil((eta2 - eta1) / halfpi))
ValueError: cannot convert float NaN to integer
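
Judging from the RuntimeWarning, the NaN comes from a zero denominator in localizationErrors.py line 136; a hedged workaround (not an official fix) is to guard that division so ax1.pie() never receives NaN slices:

# only divide when the error vector is non-empty; otherwise report 0
denom = float(sum(err_vecs[j]))
ERRORS.append(tot_errs / denom if denom > 0 else 0.0)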

Missing figures within the generated latex file

Hello,

Another issue from my side. Each time, one figure contains only empty filenames (example below), even when running your fake example. To give some context: at the very beginning of the script I get an error "failed to get the current screen resources". I run the script on a remote computer through x2go, which may explain this error and possibly also the missing images.

If I can help to test something, please let me know.

Thank you. Doms.

\begin{figure}[h!]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{cc|cc}
\hline\hline
\textbf{High Score - Low OKS} & \textbf{Low Score - High OKS} & \textbf{High Score - Low OKS} & \textbf{Low Score - High OKS}\\
\hline
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{}\\
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{}\\
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{}\\
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{}\\
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{}\\
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{} &
\includegraphics[width=.2\linewidth,height=.15\paperwidth,keepaspectratio]{}\\
\end{tabular}
}
\vspace{-3mm}
\caption{ {\small \textbf{Top Scoring Errors.} Scoring errors ordered by relevance top to bottom and left to right.
Each scoring error consists of a ground-truth annotation and a pair of detections shown side by side, one with high score and low OKS (left),
and one with low score and high OKS (right).
The relevance is computed as the geometric mean between the difference of the OKS obtained
by the two detections and the difference of their confidence score.
The ground truth skeleton is in green, and the color coding of the detection skeleton is described in Sec. 1.
Each image title contains the following information:
[detection\textunderscore score, OKS, image\textunderscore id, ground\textunderscore truth\textunderscore id, detection\textunderscore id].}}
\end{figure}

Repositioning Code

In the paper it is said that keypoints affected by localization errors are repositioned. Where in the code can I find the repositioning method?

About Histogram Plot of False Positive Scores

In the histogram plot of false positive scores at different percentiles, we can see that for the lower percentiles the false positive scores are high. In the generated text file, each percentile has a score and a number of detections, i.e. Percentiles of the scores of all Detections:

  • 20th perc. score:[0.004]; num. dts:[1802]
  • 40th perc. score:[0.154]; num. dts:[3603]
  • 60th perc. score:[0.378]; num. dts:[5404]
  • 80th perc. score:[0.554]; num. dts:[7205]

If the X-axis represents percentiles, why are the X-axis values not evenly distributed?
What does the Y-axis value mean, and how is it computed?
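
For what it's worth, here is a hedged sketch of how such percentile cut-offs could be computed from the detection scores with numpy (the score array is made up; only the mechanics are illustrated):

import numpy as np

scores = np.random.rand(9006)                 # stand-in for all detection confidence scores
for p in (20, 40, 60, 80):
    cutoff = np.percentile(scores, p)
    print('%dth perc. score:[%.3f]; num. dts:[%d]'
          % (p, cutoff, np.sum(scores <= cutoff)))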
