
Comments (2)

matteorr commented on June 26, 2024

Thanks for your question!

If you go to the coco-analyze project page on my website, you can download the JSON files containing the annotation IDs for all instances in each of the benchmarks described in the paper.

Occlusion and crowding benchmarks:

import json
with open('./coco_train2014_occlusion_crowding_benchmarks.json','r') as fp: 
    occl_benchmarks = json.load(fp)

# this will print the criteria for defining each benchmark and the number of instances it contains
# n_v: number of visible keypoints
# n_o: number of overlaps with IoU > 0.1
# n_i: number of instances in that benchmark
for bi in occl_benchmarks: 
    print("n_v: {}, n_o:{}, n_i:{}".format(bi['num_visible_keypoints'], bi['num_overlaps'], len(bi['gtIds']))

Size benchmarks:

import json
with open('./coco_train2014_size_benchmarks.json','r') as fp: 
    size_benchmarks = json.load(fp)

# this will print the criteria for defining each benchmark and the number of instances it contains
# a_l: area range label
# a_s: area range size in pixels
# n_i: number of instances in that benchmark
for bi in size_benchmarks: 
    print("a_l: {}, a_s:{}, n_i:{}".format(bi['areaRngLbl'], bi['areaRng'], len(bi['gtIds'])))

Now that you have the ID of each ground-truth annotation, you can recover the corresponding image ID using the standard COCO API:

# ... load the coco annotations ...
# e.g., with pycocotools (the annotation path below is just an example):
from pycocotools.coco import COCO
coco_kps = COCO('./annotations/person_keypoints_train2014.json')

b_i = 0 # choose the benchmark index you want: for occlusion b_i is 0 to 11, for size b_i is 0 to 3
anns = coco_kps.loadAnns(size_benchmarks[b_i]['gtIds'])
image_ids = [a['image_id'] for a in anns]
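
If you also want the image file names, those ids can be fed back into the API (standard pycocotools calls; the set() deduplicates, since an image id appears once per instance):

# deduplicate the ids, then look up the image records
imgs = coco_kps.loadImgs(list(set(image_ids)))
file_names = [img['file_name'] for img in imgs]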

Note that some images will contain instances from multiple benchmarks, e.g. on one side of the image there is a completely visible person with no overlapping people, while on the other side there is a group of 3 overlapping people with some keypoints not visible.

Hope this is clear. I'm closing the issue, but feel free to reopen or comment.

The only caveat is that these benchmarks are for the COCO 2014 training set. If you want the benchmarks for the latest COCO training set release (from 2017), you can simply apply the same principle to divide the annotations into the benchmarks they belong to. I might do this in the next week and add it to this repo if you find it useful.
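
As a rough illustration of that principle (my own sketch, not code from the repo): recount the visible keypoints of every annotation and count its overlaps with the other instances in the same image, then bin the annotation ids by those two numbers. The field names follow the standard COCO keypoint format; for simplicity the overlap here is bounding-box IoU, and the grouping of counts into the paper's ranges is left out:

from collections import defaultdict
from pycocotools.coco import COCO

def bbox_iou(a, b):
    # a, b are COCO boxes in [x, y, w, h] format
    iw = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

coco_kps = COCO('./annotations/person_keypoints_train2017.json')  # example path
benchmarks = defaultdict(list)  # (n_visible, n_overlaps) -> annotation ids
for img_id in coco_kps.getImgIds():
    anns = coco_kps.loadAnns(coco_kps.getAnnIds(imgIds=img_id))
    for a in anns:
        # every 3rd entry of 'keypoints' is a visibility flag (0 = not labeled)
        n_vis = sum(1 for v in a['keypoints'][2::3] if v > 0)
        n_ovl = sum(1 for b in anns
                    if b['id'] != a['id'] and bbox_iou(a['bbox'], b['bbox']) > 0.1)
        benchmarks[(n_vis, n_ovl)].append(a['id'])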


AbrarZShahriar commented on June 26, 2024

That would be great, thanks.

