
cocoapi's People

Contributors

johnqczhang, mohomran, pdollar, ppwwyyxx, rbgirshick, rodrigob, sampepose, szagoruyko, tylin, yassersouri


cocoapi's Issues

"area" in annotations

Hi,
I'm creating my own dataset, but while writing the annotations I found a field called "area" that I don't understand.
According to my analysis, it doesn't refer to:

  • image area (width x height)
  • bounding box area (width x height)
  • segmentation area

Any idea how I should calculate it?
Thanks
Nico
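
For what it's worth, one common way to fill this field (and what pycocotools itself does when preparing segmentation results in loadRes) is the pixel area of the rasterized segmentation mask, which can differ slightly from a naive polygon or bounding-box area. A minimal sketch, assuming segmentation, img_height and img_width are already defined:

from pycocotools import mask as maskUtils

# segmentation given as COCO polygon(s): [[x0, y0, x1, y1, ...], ...]
rles = maskUtils.frPyObjects(segmentation, img_height, img_width)
rle = maskUtils.merge(rles)
area = float(maskUtils.area(rle))  # pixel area of the rasterized mask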

precision metric comes out wrong

Hi,

I am encountering an inaccurate case with the precision metric.

I have created a simple test (files attached):
coco_problem.zip

  • the ground_truth.json has one image and one bbox annotated for category 1
  • the results.json has two bounding boxes: one for a perfect detection (category 1), and one for a large false alarm (category 2).
  • I run the following code

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

cocoGt=COCO('/ssd2/ground_truth.json')
cocoDt=cocoGt.loadRes('/ssd2/results.json')
cocoEval = COCOeval(cocoGt,cocoDt,'bbox')

cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()

=> The API reports both precision and recall as 1, which looks wrong: in this case recall should be 1, but precision should be much less than 1, since we have a clear false alarm.

am I missing something here or is this a bug?

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 1.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 1.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 1.000
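
For anyone debugging a case like this, one way to see where these numbers come from is to inspect the accumulated per-category precision array directly; a minimal sketch, assuming cocoEval has been evaluated and accumulated as above:

import numpy as np

# cocoEval.eval['precision'] has shape [T, R, K, A, M]:
# T = IoU thresholds (0.50:0.05:0.95), R = recall thresholds,
# K = categories, A = area ranges, M = maxDets settings.
precision = cocoEval.eval['precision']

iou_idx, area_idx, maxdet_idx = 0, 0, 2  # IoU=0.50, area=all, maxDets=100
for k, cat_id in enumerate(cocoEval.params.catIds):
    p = precision[iou_idx, :, k, area_idx, maxdet_idx]
    p = p[p > -1]  # -1 marks slots with no ground truth
    ap = p.mean() if p.size else float('nan')
    print('category %d: AP50 = %.3f' % (cat_id, ap))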

invalid numeric argument '/Wno-cpp'

After I run setup.py I encounter this:

cl : Command line error D8021 : invalid numeric argument '/Wno-cpp'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\BIN\\x86_amd64\\cl.exe' failed with exit status 2

I have all C++ tools installed and have updated my pip and setuptools.

python 3.5.2
Anaconda 64 bit
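
A workaround often reported for MSVC builds (a sketch, not an official fix) is to drop the GCC-specific flags from extra_compile_args in PythonAPI/setup.py when building on Windows, since cl.exe does not understand -Wno-cpp or -std=c99:

# sketch of PythonAPI/setup.py with platform-dependent compile flags
import sys
import numpy as np
from setuptools import setup, Extension
from Cython.Build import cythonize

extra_args = ['-Wno-cpp', '-Wno-unused-function', '-std=c99']
if sys.platform == 'win32':
    extra_args = []  # let cl.exe use its defaults instead of GCC flags

ext_modules = [
    Extension(
        'pycocotools._mask',
        sources=['../common/maskApi.c', 'pycocotools/_mask.pyx'],
        include_dirs=[np.get_include(), '../common'],
        extra_compile_args=extra_args,
    )
]

setup(name='pycocotools', packages=['pycocotools'],
      ext_modules=cythonize(ext_modules))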

Compile the PythonAPI on Windows

I'm having problems running the make file on Windows. Could it be the c99 flag for the compiler? I can't even build with extra_compile_args=["-Wno-cpp", "-Wno-unused-function", "-std=c99"] in setup.py; when I pass those args I see a lot of C-specific errors in the console.

Does anyone have any solution to this?

thanks.

What to run after make?

Hey,

Sorry for the lame question, but what do I need to run after the make command for the PythonAPI?

Thanks!
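
For what it's worth, make just builds the _mask extension in place; a quick way to check that it worked is simply to import the package from your own code (a sketch; the paths below are placeholders for your checkout and annotation file):

import sys
sys.path.append('/path/to/coco/PythonAPI')  # only needed if you did not install the package

from pycocotools.coco import COCO
coco = COCO('/path/to/annotations/instances_val2014.json')
print(len(coco.getImgIds()), 'images indexed')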

python api cannot handle annotation error

It seems that there is something wrong with the annotation of 'COCO_train2014_000000255018.jpg'. When I try to use the Python API's 'segToMask' to generate the binary mask for this image, line 368 of coco.py throws an exception. I think we could add some code to guard against this; maybe we could change line 367 of coco.py to something like this.

rr, cc = polygon(np.array(s[1:N:2]).clip(max = (h - 1)), np.array(s[0:N:2]).clip(max = (w - 1))) # (y, x)

How to install pycocotools==2.0

Hello,

I would like to install pycocotools==2.0 to my conda env.

I tried conda install pycocotools==2.0 but got this error:

Fetching package metadata .............


PackageNotFoundError: Package missing in current linux-64 channels: 
  - pycocotools ==2.0

Close matches found; did you mean one of these?

    pycocotools: pytools

Thank you!

LuaAPI supports parsing large instances_train2014.json

I am wondering whether the LuaAPI is going to support parsing large JSON files. Currently, parsing instances_train2014.json (354 MB) causes LuaJIT to run out of memory. I would also like to know whether the LuaAPI will become as complete as the Python or Matlab APIs in the future. Thanks.

get the index

Where can I get an index of all images in the datasets, i.e. something like a mapping [a.jpg, index]?

Unable to import pycocotools from script

I am playing with the pycocotools package and found that I was not able to import pycocotools from a python script, but COULD import it from ipython. For example:

python -c "import pycocotools"

yields

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pycocotools

But, doing

ipython
In [1]: import pycocotools

Works fine.

Has anyone seen this problem before? I'm working in a virtual environment.
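
One way to narrow this down (a sketch): print the interpreter and module search path from both entry points; if ipython lives in a different environment than plain python, they will see different site-packages directories.

# save as check_env.py and run it both as `python check_env.py` and inside ipython
import sys
print(sys.executable)        # which interpreter is actually running
print('\n'.join(sys.path))   # where it will look for pycocotools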

Error: specified key is not present in this container

I got this error when running getANNO.m:

Error using containers.Map/values
The specified key is not present in this container.

Error in CocoApi/CocoApi/makeMultiMap (line 94)
        js=values(keysMap,num2cell(keysAll));

Error in CocoApi (line 73)
        is.imgAnnIdsMap = makeMultiMap(is.imgIds,...

Error in getANNO (line 24)
    coco=CocoApi(annFile);

How do I know which key is not present?

Error: Attempt to reference field of non-structure array

Why do I get an error when reading the JSON file?

annFile =
data_iter8.json

Loading and preparing annotations... Attempt to reference field of non-structure array.

Error in CocoApi (line 63)
      is.imgIds = [coco.data.images.id]';

Error in getANNO (line 25)
    coco=CocoApi(annFile);
 
>> im=fileread('data_iter8.json')

mask.decode

When I use pycocotools (with pycaffe on Windows), I get the error: Process finished with exit code -1073740940 (0xC0000374).
It always exits abnormally at mask.decode and COCO.showAnns, but when I use them on Ubuntu they run normally. So I want to ask: can pycocotools be used on Windows?

Readme.md

Can someone please briefly describe this project?

instances_minival2014.json and instances_valminusminival.json

I have attempted many times to download instances_minival2014.json and instances_valminusminival.json from the links given by rbgirshick, but the links are all broken. If someone has the two files, could you upload them to Google Drive? Thank you very much!

Unable to find vcvarsall.bat

PS C:\Users\Link> cd C:\Users\Link\Desktop\coco-master\PythonAPI
PS C:\Users\Link\Desktop\coco-master\PythonAPI> python setup.py install
Compiling pycocotools/_mask.pyx because it changed.
[1/1] Cythonizing pycocotools/mask.pyx
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.5
creating build\lib.win-amd64-3.5\pycocotools
copying pycocotools\coco.py -> build\lib.win-amd64-3.5\pycocotools
copying pycocotools\cocoeval.py -> build\lib.win-amd64-3.5\pycocotools
copying pycocotools\mask.py -> build\lib.win-amd64-3.5\pycocotools
copying pycocotools\__init__.py -> build\lib.win-amd64-3.5\pycocotools
running build_ext
building 'pycocotools._mask' extension
error: Unable to find vcvarsall.bat

Include dependency on Cython

While trying to execute

-For Python, run "make" under coco/PythonAPI

The following error was generated:

python setup.py build_ext --inplace
Traceback (most recent call last):
  File "setup.py", line 2, in <module>
    from Cython.Build import cythonize
ImportError: No module named Cython.Build
make: *** [all] Error 1

pip install Cython
fixes the problem

It seems like it would be a good idea to include dependency checking in setup.py.
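
One possible form such a check could take (a hypothetical sketch, not the repository's actual setup.py) is to fail with a clearer message when Cython is missing:

# sketch: top of setup.py
try:
    from Cython.Build import cythonize
except ImportError:
    raise SystemExit(
        'pycocotools needs Cython to build its C extension; '
        'install it first, e.g. `pip install cython`.'
    )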

no module named _mask?

I installed the coco library by doing:
python setup.py build_ext --inplace

Anybody run into the below error after importing?

ImportErrorTraceback (most recent call last)
in ()
----> 1 from pycocotools.coco import COCO

/usr/local/lib/python2.7/dist-packages/pycocotools/coco.py in ()
53 import copy
54 import itertools
---> 55 from . import mask as maskUtils
56 import os
57 from collections import defaultdict

/usr/local/lib/python2.7/dist-packages/pycocotools/mask.py in ()
1 author = 'tsungyi'
2
----> 3 import pycocotools._mask as _mask
4
5 # Interface for manipulating masks stored in RLE format.

ImportError: No module named _mask
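
A quick check of which copy of pycocotools is actually being imported, and whether the compiled extension sits next to it (a sketch): the traceback above points at /usr/local/lib/python2.7/dist-packages, which may be a different copy than the one built in place.

import os
import pycocotools

pkg_dir = os.path.dirname(pycocotools.__file__)
print('pycocotools loaded from:', pkg_dir)
print('contents:', sorted(os.listdir(pkg_dir)))  # look for a compiled _mask.so / _mask.pyd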

the function that converts a mask to a polygon

Could you please share the function that converts masks to polygons, which you used to generate the segmentation annotations for the COCO dataset? I would like to train DeepMask on my own dataset, and it needs masks represented as polygons to generate the ground truth.

Thank you.
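
The exact conversion code used for the official annotations is not part of this repository as far as I can tell, but a common approximation (a sketch using scikit-image, not the authors' tool) is to trace the mask's contours and flatten them into COCO-style polygon lists:

import numpy as np
from skimage import measure

def mask_to_polygons(binary_mask, min_points=3):
    """Approximate a binary mask by COCO-style [x0, y0, x1, y1, ...] polygons."""
    polygons = []
    # pad so contours touching the image border are closed
    padded = np.pad(binary_mask, 1, mode='constant', constant_values=0)
    for contour in measure.find_contours(padded, 0.5):
        contour = contour - 1               # undo the padding offset
        contour = np.flip(contour, axis=1)  # (row, col) -> (x, y)
        if len(contour) >= min_points:
            polygons.append(contour.ravel().tolist())
    return polygons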

Python API requires X session

The API fails over ssh w/o X forwarding

pjreddie@start9:~/data/coco$ python
Python 2.7.8 (default, Oct 18 2014, 12:50:18) 
[GCC 4.9.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from pycocotools.coco import COCO
Unable to init server: Could not connect: Connection refused

Is there a way to import it so that this doesn't happen? Most of my workflow happens in a screen session over ssh, and getting X forwarding to work is a pain in that environment.
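
A workaround that has worked for others (hedged; the underlying cause is matplotlib, which pycocotools imports, trying to connect to a display): force a non-interactive backend before anything imports pyplot.

import matplotlib
matplotlib.use('Agg')  # must run before pyplot is imported anywhere

from pycocotools.coco import COCO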

Import error

Hi,

I get an import error, ImportError: pycocotools/_mask.so: undefined symbol: PyFPE_jbuf, when I try to run the first block of pyCocoDemo.ipynb.

I've already run make and no error occurred.

Do you have any leads?

Questions About Segmentation, BBox, and Area while annotating file

Hi,

I am wondering what segmentation, bbox, and area mean, as the mscoco website does not explain them very well.

Another question: how do I know what numbers to put in for these annotations? I have no idea what to put in for bbox, area, and segmentation, so help would be much appreciated.

Thank you in advance.
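
For reference, a minimal hand-written instance annotation might look like the sketch below (all values are made up): bbox is [x, y, width, height] in pixels, segmentation is a list of [x0, y0, x1, y1, ...] polygons outlining the object, and area is the object's area in pixels.

annotation = {
    "id": 1,
    "image_id": 42,
    "category_id": 1,
    "bbox": [100.0, 150.0, 80.0, 60.0],  # x, y, width, height
    "segmentation": [[100.0, 150.0, 180.0, 150.0,
                      180.0, 210.0, 100.0, 210.0]],  # one polygon (here simply the box outline)
    "area": 4800.0,  # pixel area of the segmented region
    "iscrowd": 0,
}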

attempt to index global 'torch' (a nil value)

Hi,

I was able to use coco in Python, but I have had no success with Lua following your readme. Can you please tell me what I'm doing wrong? It is probably something basic in my configuration, but I'm new to the Lua and Torch environment...

lua5.1 LuaAPI/cocoDemo.lua 
lua5.1: ...e/test/torch/install/share/lua/5.1/coco/CocoApi.lua:45: attempt to index global 'torch' (a nil value)
stack traceback:
    ...e/test/torch/install/share/lua/5.1/coco/CocoApi.lua:45: in main chunk
    [C]: in function 'require'
    ...home/test/torch/install/share/lua/5.1/coco/init.lua:11: in main chunk
    [C]: in function 'require'
    LuaAPI/cocoDemo.lua:2: in main chunk
    [C]: ?

I don't receive all the cat_ids that an image has using anns

I have the following code and I am hoping to get all the categories an image has; however, it shows only one category per image, even though that image might have multiple annotations:

import numpy as np
import skimage.io as io
import matplotlib.pyplot as plt
import pylab
import sys
from pprint import pprint as p
from time import sleep
import os
sys.path.append('/home/mona/mscoco/coco/PythonAPI')
p(sys.path)
from pycocotools.coco import COCO

pylab.rcParams['figure.figsize'] = (10.0, 8.0)

dataDir='.'
dataType='train2014'
annFile='%s/annotations/instances_%s.json'%(dataDir,dataType)

coco = COCO(annFile)

cats = coco.loadCats(coco.getCatIds())
nms = [cat['name'] for cat in cats]


annFile = '%s/annotations/captions_%s.json'%(dataDir,dataType)
coco_caps=COCO(annFile)


categories = {"person","bicycle","car","motorcycle","airplane","bus","train","truck","boat","traffic light","fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow","elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee","skis", "snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle","wine glass","cup","fork","knife","spoon"    ,"bowl","banana","apple","sandwich","orange","broccoli","carrot","hot dog","pizza","donut","cake","chair","couch","potted plant","bed","dining table","toilet","tv","laptop","mouse","remote","keyboard","cell phone","microwave","oven","toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"}

for category in categories:
    category_path = "/home/mona/mscoco/all_categories_mscoco_caption/"+category
    if not os.path.exists(category_path):
        os.makedirs(category_path)
    image_count = 0 
    catIds = coco.getCatIds(category)
    imgIds = coco.getImgIds(catIds=catIds );
    for imgId in imgIds:
        print("{0}: {1}".format("image id is", imgId))
        if image_count < 1:
            image_count += 1
            img = coco.loadImgs(imgId)[0]
            annIds = coco.getAnnIds(imgId, catIds=catIds, iscrowd = None)
            anns = coco.loadAnns(annIds)
            print("{0}: {1}".format("length of annotation is", len(anns)))
            for ann in range(len(anns)):
                print("annotation is")
                print("{0}: {1}".format("category_id is", anns[ann]['category_id']))
                print("{0}: {1}".format("id is", anns[ann]['id']))
                print(coco.loadCats(anns[ann]['category_id']))
            coco.showAnns(anns)
            annIds = coco_caps.getAnnIds(imgId)
            anns = coco_caps.loadAnns(annIds)
            filename = "/home/mona/mscoco/all_categories_mscoco_caption/"+category+'/' +category+'_'+str(imgId) + ".txt"
            caption_file = open(filename, 'wb')
            for i in range(5):
                caption_file.write((anns[i]['caption']) + os.linesep)
            caption_file.close()
        else:
            break

imgId = 152360
print(type(coco.loadImgs))
print(dir(coco.loadImgs))
img = coco.loadImgs(imgId)[0]
print(img)

Also, how can I load just the categories related to one image id? The last few lines of the code gave an error:

Traceback (most recent call last):
  File "try_coco.py", line 65, in <module>
    img = coco.loadImgs(imgId)[0]
  File "/home/mona/mscoco/coco/PythonAPI/pycocotools/coco.py", line 219, in loadImgs
    return [self.imgs[ids]]
KeyError: 152360

Here's what the code execution looks like:
image id is: 98304
length of annotation is: 3
annotation is
category_id is: 47
id is: 1511501
[{u'supercategory': u'kitchen', u'id': 47, u'name': u'cup'}]
annotation is
category_id is: 47
id is: 1513367
[{u'supercategory': u'kitchen', u'id': 47, u'name': u'cup'}]
annotation is
category_id is: 47
id is: 2099764
[{u'supercategory': u'kitchen', u'id': 47, u'name': u'cup'}]

We see that various categories are shown here, yet using the API only the category 'cup' is captured.
http://mscoco.org/explore/?id=1927052
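
For what it's worth, the filtering likely comes from passing catIds to getAnnIds, which restricts the returned annotations to that single category. A sketch of collecting every category present in one image (assuming coco is loaded as above):

img_id = 98304  # example image id from the output above
ann_ids = coco.getAnnIds(imgIds=img_id)  # no catIds filter, so all annotations are returned
anns = coco.loadAnns(ann_ids)
cat_ids = sorted({ann['category_id'] for ann in anns})
cat_names = [cat['name'] for cat in coco.loadCats(cat_ids)]
print(cat_names)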

mask.frPyObjects is leaking memory

Reproduce by calling mask.frPyObjects(ann['segmentation'], img['height'], img['width']) in a loop and watching your machine run out of memory. So I guess the problem is somewhere in frPoly().
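
A minimal reproduction along those lines might look like this (a sketch; ann and img are assumed to come from a loaded COCO object, and memory use is observed externally, e.g. with top):

from pycocotools import mask as maskUtils

for _ in range(100000):
    # repeatedly encode the same polygon segmentation; resident memory should stay flat
    rles = maskUtils.frPyObjects(ann['segmentation'], img['height'], img['width'])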

classes

I want to know whether there is a clothing object class in COCO; I have not found one on the website.
Thanks!

CocoUtils.convertPascalGt error

I am trying to convert PASCAL-format ground truth to COCO JSON format by invoking CocoUtils.convertPascalGt in the Matlab (2014b) command window, and I got the following error:
"Converting PASCAL VOC dataset... Undefined function or variable 'VOCinit'.

Error in CocoUtils.convertPascalGt (line 43)
VOCinit; C=VOCopts.classes'; catsMap=containers.Map(C,1:length(C));"

I am not familiar with Matlab. How can I overcome this error?

Also, does anyone have a Python script for the same functionality?

Thanks!
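
Regarding a Python equivalent: I am not aware of one in this repository, but the bounding-box part of such a conversion is straightforward. A rough sketch, assuming the usual VOC XML layout and a category-name-to-id mapping cat_ids (all names and paths here are placeholders):

import xml.etree.ElementTree as ET

def voc_object_to_coco_ann(obj, image_id, ann_id, cat_ids):
    """Convert one <object> element of a VOC XML annotation to a COCO-style dict."""
    name = obj.findtext('name')
    b = obj.find('bndbox')
    xmin, ymin = float(b.findtext('xmin')), float(b.findtext('ymin'))
    xmax, ymax = float(b.findtext('xmax')), float(b.findtext('ymax'))
    w, h = xmax - xmin, ymax - ymin
    return {
        'id': ann_id,
        'image_id': image_id,
        'category_id': cat_ids[name],
        'bbox': [xmin, ymin, w, h],
        'area': w * h,
        'iscrowd': 0,
        'segmentation': [[xmin, ymin, xmax, ymin, xmax, ymax, xmin, ymax]],
    }

# usage sketch:
# root = ET.parse('path/to/Annotations/some_image.xml').getroot()
# anns = [voc_object_to_coco_ann(o, image_id=1, ann_id=i + 1, cat_ids=cat_ids)
#         for i, o in enumerate(root.findall('object'))]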

ImportError: dynamic module does not define module export function (PyInit__mask)

I am using Python 3.5.2 to do some experiments and have installed the PythonAPI.
When I import it with "from pycocotools.coco import COCO", I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hjp/deepLearning/cocoTools/coco/PythonAPI/pycocotools/coco.py", line 55, in <module>
from . import mask as maskUtils
File "/home/hjp/deepLearning/cocoTools/coco/PythonAPI/pycocotools/mask.py", line 3, in <module>
import pycocotools._mask as _mask
ImportError: dynamic module does not define module export function (PyInit__mask)

loadImgs fails for numpy dtypes

Making a minor adjustment to the pycocoDemo will demonstrate the issue.

In cell 5 (removing randomness for the sake of simplicity), we have

catIds = coco.getCatIds(catNms=['person','dog','skateboard']);
imgIds = coco.getImgIds(catIds=catIds );
img = coco.loadImgs(imgIds[0])[0]

If we change imgIds to be a numpy array (of any int or uint dtype), the same code fails with 'NoneType' object has no attribute '__getitem__'

catIds = coco.getCatIds(catNms=['person','dog','skateboard']);
imgIds = np.array(coco.getImgIds(catIds=catIds ), dtype=np.int64);
img = coco.loadImgs(imgIds[0])[0]

Besides the difference in datatype, the input is the same in the two cases. It seems like it might be a quick fix, but I cannot find exactly where to look.
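
A workaround that sidesteps the issue (a sketch): convert the numpy integer back to a native Python int before the call, since loadImgs only recognizes plain int scalars.

img = coco.loadImgs(int(imgIds[0]))[0]  # np.int64 -> int so the scalar branch of loadImgs is taken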

turn .json to .txt which will use on darknet

Hi, I don't know how to turn the .json file into a .txt file. I just want a .txt file that stores the class number, x ratio, y ratio, width, and height, which I will use to train on darknet. Who can help me?
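
A rough sketch of such a conversion (assuming darknet's usual label format of one line per object: class index, then box centre x/y and width/height, all normalized to [0, 1] by the image size; the annotation path is a placeholder):

from pycocotools.coco import COCO

coco = COCO('annotations/instances_train2014.json')
cat_ids = sorted(coco.getCatIds())
cat_to_class = {cid: i for i, cid in enumerate(cat_ids)}  # contiguous 0-based class numbers

for img in coco.loadImgs(coco.getImgIds()):
    lines = []
    for ann in coco.loadAnns(coco.getAnnIds(imgIds=img['id'], iscrowd=None)):
        x, y, w, h = ann['bbox']
        cx, cy = (x + w / 2) / img['width'], (y + h / 2) / img['height']
        lines.append('%d %.6f %.6f %.6f %.6f' % (cat_to_class[ann['category_id']],
                                                 cx, cy, w / img['width'], h / img['height']))
    with open(img['file_name'].rsplit('.', 1)[0] + '.txt', 'w') as f:
        f.write('\n'.join(lines))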

Problem loading coco in Torch

Hi - I've tried to configure coco correctly, but when I do "require 'coco'" in the Torch REPL (trepl) I get this error:
"/home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:363: /home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:363: /home/ubuntu/torch/install/share/lua/5.1/torch/init.lua:62: bad argument #2 to 'newmetatable' (parent class name or nil expected)
stack traceback:
[C]: in function 'error'
/home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:363: in function 'require'
[string "_RESULT={require 'coco'}"]:1: in main chunk
[C]: in function 'xpcall'
/home/ubuntu/torch/install/share/lua/5.1/trepl/init.lua:630: in function 'repl'
...untu/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:185: in main chunk
[C]: at 0x00406670"

I am not very familiar with Lua and/or Torch, so sorry if this is a dumb question.

I have been able to get it to compile and load on OS X - just not on this Ubuntu instance, which is from the "official" Torch AMI on EC2.

Trying To Create Annotation; Wondering If This Was Correct Format

Hello everybody

I am trying to create a separate annotation for my own images to use with facebook/deepmask and I was wondering if this was the correct format.

Here is my annotation.json file:

{
"info": {"description": "Grape Vine Pictures", "url": "deeplearningwithtorch.blogspot.com", "version": "1.0", "year": 2016, "contributor": "Stephen Kim", "date_created": "2016-12-15 09:11:52.357475"}, 
"images": [{"license": 0, "file_name": "COCO_train2014_000000581940.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581940}, 
		   {"license": 0, "file_name": "COCO_train2014_000000581941.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581941},
		   {"license": 0, "file_name": "COCO_train2014_000000581942.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581942},
		   {"license": 0, "file_name": "COCO_train2014_000000581943.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581943},
		   {"license": 0, "file_name": "COCO_train2014_000000581944.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581944},
		   {"license": 0, "file_name": "COCO_train2014_000000581945.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581945},
		   {"license": 0, "file_name": "COCO_train2014_000000581946.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581946},
		   {"license": 0, "file_name": "COCO_train2014_000000581947.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581947},
		   {"license": 0, "file_name": "COCO_train2014_000000581948.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581948},
		   {"license": 0, "file_name": "COCO_train2014_000000581949.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581949},
		   {"license": 0, "file_name": "COCO_train2014_000000581950.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581950},
		   {"license": 0, "file_name": "COCO_train2014_000000581951.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581951},
		   {"license": 0, "file_name": "COCO_train2014_000000581952.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581952},
		   {"license": 0, "file_name": "COCO_train2014_000000581953.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581953},
		   {"license": 0, "file_name": "COCO_train2014_000000581954.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581954},
		   {"license": 0, "file_name": "COCO_train2014_000000581955.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581955},
		   {"license": 0, "file_name": "COCO_train2014_000000581956.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581956},
		   {"license": 0, "file_name": "COCO_train2014_000000581957.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581957},
		   {"license": 0, "file_name": "COCO_train2014_000000581958.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581958},
		   {"license": 0, "file_name": "COCO_train2014_000000581959.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581959},
		   {"license": 0, "file_name": "COCO_train2014_000000581960.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581960},
		   {"license": 0, "file_name": "COCO_train2014_000000581961.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581961},
		   {"license": 0, "file_name": "COCO_train2014_000000581962.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581962},
		   {"license": 0, "file_name": "COCO_train2014_000000581963.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581963},
		   {"license": 0, "file_name": "COCO_train2014_000000581964.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581964},
		   {"license": 0, "file_name": "COCO_train2014_000000581965.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581965},
		   {"license": 0, "file_name": "COCO_train2014_000000581966.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581966},
		   {"license": 0, "file_name": "COCO_train2014_000000581967.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581967},
		   {"license": 0, "file_name": "COCO_train2014_000000581968.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581968},
		   {"license": 0, "file_name": "COCO_train2014_000000581969.jpg", "coco_url": "", "height": 640, "width": 480, "date_captured": "", "flickr_url": "", "id": 581969}],
"annotations": [{"id": 1, "image_id": 581940, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 2, "image_id": 581941, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 3, "image_id": 581942, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 4, "image_id": 581943, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 5, "image_id": 581944, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 6, "image_id": 581945, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 7, "image_id": 581946, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 8, "image_id": 581947, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 9, "image_id": 581948, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 10, "image_id": 581949, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 11, "image_id": 581950, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 12, "image_id": 581951, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 13, "image_id": 581952, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 14, "image_id": 581953, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 15, "image_id": 581954, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 16, "image_id": 581955, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 17, "image_id": 581956, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 18, "image_id": 581957, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 19, "image_id": 581958, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 20, "image_id": 581959, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 21, "image_id": 581960, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 22, "image_id": 581961, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 23, "image_id": 581962, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 24, "image_id": 581963, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 25, "image_id": 581964, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 26, "image_id": 581965, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 27, "image_id": 581966, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 28, "image_id": 581967, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 29, "image_id": 581968, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []},
				{"id": 30, "image_id": 581969, "category_id" : 10001, "segmentation": [[]], "area": , "iscrowd": 0, "bbox": []}],
"categories": [{"id" : 10001,"name" : grape, "supercategory" : grape}],
"licenses": []
}

Stephen
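
For comparison, here is a sketch of how one annotation entry and the categories entry could be written so the file is valid and usable (the numbers are placeholders). In the file above, "area" has no value and the category name and supercategory strings are unquoted, both of which make the file invalid JSON; the empty "segmentation" and "bbox" lists are valid JSON but will likely break tools such as deepmask that expect real polygons and boxes.

"annotations": [{"id": 1, "image_id": 581940, "category_id": 10001,
                 "segmentation": [[100.0, 120.0, 300.0, 120.0, 300.0, 400.0, 100.0, 400.0]],
                 "area": 56000.0, "iscrowd": 0,
                 "bbox": [100.0, 120.0, 200.0, 280.0]}],
"categories": [{"id": 10001, "name": "grape", "supercategory": "grape"}]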

README.txt update needed - Images not in coco/images/

The page https://github.com/pdollar/coco says

MS COCO API - http://mscoco.org/
Please download, unzip, and place the images in: coco/images/
Please download and place the annotations in: coco/annotations/

The URL http://mscoco.org/coco/images/ generates the following error:
Not Found
The requested URL /coco/images/ was not found on this server.

It looks like the base URL has changed. For example, the 2015 testing images are at http://msvocds.blob.core.windows.net/coco2015/test2015.zip

Strange crowd annotations

Hi,

I am experiencing some issues on images with crowd annotations. Sometimes polygons that should not be connected appear connected. Below I have created a minimum working example: it shows the code to display the incorrect annotations, and I also show the corresponding image. Whereas the round shapes are correct, the entire V shape in the top right should not be there. Is this a known issue?

imgId = 000000002963;
cocoApi = CocoApi('instances_train2014.json');
annIds = cocoApi.getAnnIds('imgIds', imgId, 'iscrowd', []);
anns = cocoApi.loadAnns(annIds);
curSegs = anns(end).segmentation;
M = double(MaskApi.decode(curSegs));
imagesc(M)

[image: crowd annotations overlaid on the corresponding image]

Name 'unicode' is not defined in Python 3

I'm getting this error when I call loadRes() with Python 3.5. Python 3 replaced unicode with str.

coco_results = coco_val.loadRes(results)

This is the stack trace:

/usr/local/lib/python3.5/dist-packages/pycocotools/coco.py in loadRes(self, resFile)
    301         print('Loading and preparing results...')
    302         tic = time.time()
--> 303         if type(resFile) == str or type(resFile) == unicode:
    304             anns = json.load(open(resFile))
    305         elif type(resFile) == np.ndarray:

NameError: name 'unicode' is not defined
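
A sketch of the kind of change that makes this check Python-3-safe (hypothetical, not necessarily the committed fix): test against str with isinstance instead of comparing against the removed unicode name.

# coco.py, loadRes(): `unicode` no longer exists in Python 3
if isinstance(resFile, str):
    anns = json.load(open(resFile))
elif type(resFile) == np.ndarray:
    anns = self.loadNumpyAnnotations(resFile)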

RLE to segmentation

Hi there, nice job with the datasets!
I have one question, also regarding closed #23.
I'm trying to create a new JSON with the segmentation field.

It is a little bit hard to find out what this segmentation field means...

I can get an RLE object (but it is probably wrong):

import numpy as np
from PIL import Image, ImageDraw
from pycocotools.mask import encode

boundsTuples = [(x, y), (x1, y1), ...]  # polygon vertices

img = Image.new('L', (width, height), 0)
ImageDraw.Draw(img).polygon(boundsTuples, outline=1, fill=1)
mask = np.array(img)
fortran = np.asfortranarray(mask)
encoded = encode(fortran)
# encoded now has {counts, size} fields

How can I convert the RLE object into a segmentation array?
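
Two directions that may help (a sketch, not an official recipe): the RLE itself can be stored as the segmentation of an iscrowd=1 annotation once its counts are JSON-serializable, or it can be decoded back to a binary mask and traced into polygons (for example with a contour-tracing helper like the mask_to_polygons sketch further up this page):

from pycocotools import mask as maskUtils

# 1) keep the RLE as the segmentation (the form used for crowd annotations);
#    json needs counts as a plain string rather than bytes
rle = dict(encoded)
if isinstance(rle['counts'], bytes):
    rle['counts'] = rle['counts'].decode('ascii')
segmentation_rle = rle

# 2) or decode back to a binary mask and derive polygons from it
binary_mask = maskUtils.decode(encoded)
polygons = mask_to_polygons(binary_mask)  # hypothetical helper, e.g. contour tracing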

Observations on the calculations of COCO metrics

Hi,

I have some observations on the COCO metrics, especially the precision metric, that I would like to share.
It would be great if someone could clarify these points :) /cc @pdollar @tylin

To get a feeling for a system's results, I calculate the COCO average precision. To explain the issue better, I also calculate these metrics over all the observations treated as a whole (say, as one large stitched image rather than many separate images), which I call the overall recall/precision below.

Case 1: a system with perfect detection plus one false alarm. In this case, as detailed in the next figure, the COCO average precision comes out to be 1.0, which completely ignores the false alarm's existence!

[figure: Case 1 example]

Case 2: a system with zero false alarms. In this case we have no false alarms, so the overall precision is perfect at 1.0; however, the COCO precision comes out as 0.5! This case is very important, since it could mean that the COCO average precision penalizes systems with no false alarms and favors the detection part of a system in the evaluation. As you may know, systems with zero or few false alarms are of great importance in industrial applications.

[figure: Case 2 example]

So I am not sure whether the above cases are bugs, are intentional design decisions for COCO, or whether I am missing something.
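
For context, here is a sketch of the interpolated AP computation that cocoEval.accumulate performs: precision is sampled at a fixed grid of recall thresholds, so every recall level a system never reaches contributes zero. That is one plausible (hedged) explanation for Case 2 scoring around 0.5 despite having perfect precision at the recall it does reach.

import numpy as np

def interpolated_ap(recalls, precisions, recall_thrs=np.linspace(0.0, 1.0, 101)):
    """Mean of (monotonically interpolated) precision sampled at fixed recall thresholds."""
    # make precision non-increasing when read left to right, as cocoeval does
    precisions = np.maximum.accumulate(precisions[::-1])[::-1]
    samples = np.zeros_like(recall_thrs)
    inds = np.searchsorted(recalls, recall_thrs, side='left')
    for i, idx in enumerate(inds):
        if idx < len(precisions):
            samples[i] = precisions[idx]
    return samples.mean()

# one of two ground-truth objects found, with no false positives:
print(interpolated_ap(np.array([0.5]), np.array([1.0])))  # ~0.5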
