
visda-2018-public's People

Contributors

britefury, minner, neelakaushik, xcpeng


visda-2018-public's Issues

Use of COCO training data

The rules do not mention the COCO (target) training data anywhere. Are we allowed to train our domain adaptation algorithm on it (without using the labels, of course)? Or is it only meant for comparing the domain adaptation model against a model trained on the target training dataset?

Invalid images in detection track test set

I have started working with the detection track test data and I have discovered that the following images cannot be read due to having 0 bytes:

1cfb9a9ecd9f7e4c4c1b9eefb355c58d04741521.jpg
3839e4552f8dc074debcae3bb23e3d344fadc2f5.jpg
6720e49dafc58027896c079dcf31c65e623c526c.jpg
89ee19843ddcaf1bcdb12b19e10ca9245658f8da.jpg
97f07801bfd70c8f5c13e99207eeeec244b39e2a.jpg
b42a52991fc7960823246409930153dc80d1475e.jpg
d0c12a8136c25fbba435a478e618303bb7e52158.jpg
efb57473c96028f06c3bc55648797e4089fe363f.jpg
f41b0275980fee7ed76bbd6eb7c0c4758d7c5616.jpg

As a workaround, I can skip them for now and produce empty predictions for them in the predictions file.
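
For reference, a minimal sketch of how the 0-byte files can be located so that empty predictions can be emitted for them (the test-directory path below is a placeholder, not the repository's actual layout):

from pathlib import Path

TEST_DIR = Path("detection/test")  # hypothetical path to the detection test images

# Any JPEG whose size is 0 bytes cannot be decoded; collect and report them.
empty_images = sorted(p.name for p in TEST_DIR.glob("*.jpg") if p.stat().st_size == 0)
for name in empty_images:
    print(name)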

My submission's status has been "submitted" for a long time

I have started working with the detection track. This morning I submitted a results file. However, after 2 hours my submission status is still "submitted" and no scores are shown. Is this normal, and how long does it usually take for the server to evaluate the results? Thanks!

Detection track test set contains no image_list.txt

The detection track test data set does not contain, or have an associated, image_list.txt. As a consequence, we cannot determine which file order our prediction files should use.

As a workaround, I generated one using the ls command, hoping that its sort order is the correct one.
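
A minimal Python sketch of that workaround (the directory path is a placeholder, and the plain lexicographic sort is only a guess at the order the evaluation server expects):

from pathlib import Path

TEST_DIR = Path("detection/test")  # hypothetical location of the test images

# Lexicographic sort, matching the default order of `ls`.
names = sorted(p.name for p in TEST_DIR.glob("*.jpg"))
Path("image_list.txt").write_text("\n".join(names) + "\n")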

request for README of pytorch-ssd-mmd-coral

Hi, I ran the examples/ssd/train_visda.py script after figuring out the directory layout (using the visda18 dataset), and I came across several errors, shown below:
[screenshot of the error output]

When I ran it again (without any change), I got the following error message:
[screenshot of the error output]

When debugging, I found the tensor operation in the following code to be buggy:

xmin = w - boxes[:,2]
xmax = w - boxes[:,0]

Specifically, when the tensor passes through the lines above, it is actually 1-dimensional, not 2-dimensional.
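
One possible guard, assuming boxes is meant to be an (N, 4) tensor of [xmin, ymin, xmax, ymax] rows (the layout is inferred from the snippet above, not from any documentation): reshape to 2-D before flipping, as in the sketch below.

import torch

def hflip_boxes(boxes: torch.Tensor, w: int) -> torch.Tensor:
    # A single box may arrive as a 1-D tensor of shape (4,), so force (N, 4) first.
    boxes = boxes.reshape(-1, 4)
    flipped = boxes.clone()
    flipped[:, 0] = w - boxes[:, 2]  # new xmin = w - old xmax
    flipped[:, 2] = w - boxes[:, 0]  # new xmax = w - old xmin
    return flipped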

I'm still puzzled about the dataset folder configuration. I hope the README file for the code is coming out soon :)

Detection competition - unable to submit predictions to CodaLab

Hi,

I've been having difficulty submitting to the detection competition. It appears that my text file format is incorrect, as I get the following error:

Traceback (most recent call last):
  File "/tmp/codalab/tmpxfH0zT/run/program/evaluate.py", line 438, in 
    detection_evaluation(truth_file, source_file, adaptation_file, output_file)
  File "/tmp/codalab/tmpxfH0zT/run/program/evaluate.py", line 363, in detection_evaluation
    for line in f.read().splitlines()
  File "/tmp/codalab/tmpxfH0zT/run/program/evaluate.py", line 21, in chunks
    raise RuntimeError("Sequence does not split into {k}, because last elem has {elem.index(uniq)} left")
RuntimeError: Sequence does not split into {k}, because last elem has {elem.index(uniq)} left
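
Incidentally, the braces in that message are printed literally, which suggests the server-side string is raised without an f-string prefix or a .format() call. A purely illustrative comparison (the variable names are taken from the message itself; the real evaluate.py is not available):

k = 6                          # placeholder values, only to make the example run
elem, uniq = ["a", "b"], "b"

# As it appears server-side: the braces are never substituted.
msg_literal = "Sequence does not split into {k}, because last elem has {elem.index(uniq)} left"

# What was presumably intended: an f-string that interpolates the values.
msg_formatted = f"Sequence does not split into {k}, because last elem has {elem.index(uniq)} left"

print(msg_literal)
print(msg_formatted)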

Would it be possible for you guys to either:

  • provide an example submission file that the server can handle
  • provide code to generate a valid submission file
  • provide the source code of 'evaluate.py' that runs on the server
  • modify the evaluation code so that it reports more helpful error messages

Thanks! :)

Detection ground truths don't match the coco17 validation ground truths

Hi,

I have been taking a look at the ground truths that you've provided for the detection competition. I have noticed that the GT boxes in the val_ground_truth.pkl do not match those in coco17-val.txt; it seems that they have been scaled. A consistent x,y scale factor is used for each image, but I am unable to determine the pattern/algorithm used to compute the scale factors. Would it be possible to replace val_ground_truth.pkl with a file generated by direct conversion?

I have created a pull request (#11) that adds a conversion script (in case you're interested) and also changes the detection README slightly in order to clarify the first problem that I had concerning the format :)
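
For anyone wanting to reproduce the observation, a rough sketch of the per-image scale-factor check. The data layout is assumed here (filename mapped to an (N, 4) array of [xmin, ymin, xmax, ymax]) because neither file's format is documented, so treat it as illustrative only:

import pickle
import numpy as np

# Assumption: val_ground_truth.pkl unpickles to {filename: (N, 4) box array}.
with open("val_ground_truth.pkl", "rb") as f:
    scaled = pickle.load(f)

def per_image_scale(scaled_boxes, raw_boxes):
    # Estimate a single (sx, sy) pair per image, as reported in this issue.
    # raw_boxes are the boxes parsed from coco17-val.txt, in the same order.
    s = np.asarray(scaled_boxes, dtype=float)
    r = np.asarray(raw_boxes, dtype=float)
    sx = np.median(s[:, [0, 2]] / np.maximum(r[:, [0, 2]], 1e-6))
    sy = np.median(s[:, [1, 3]] / np.maximum(r[:, [1, 3]], 1e-6))
    return sx, sy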

Where to find the test data for detection track

As stated in the title, I cannot find any link to the test data for the detection track. I have also noticed that the README file in the detection folder has not been updated for several days. Could you help me access the test data for the detection track? Thanks.

synthetic

I want to build my own dataset for object detection. Which program is suitable for doing that, e.g. saving bounding box positions, randomizing objects, and so on?

Can we submit an abstract now?

Sorry, I missed the deadline. I have just noticed this:

participants are requested to submit a 2-page abstract directly via email to visda-[email protected], **within 1 week of the challenge end.**

Can we still submit an abstract now?

Did not notice the deadline for submitting the form

Dear Visda organizers,

I had not noticed that we needed to submit the Google form https://goo.gl/forms/G6Kl7OuUVg1BvbRu1 before Aug 25th. I am wondering if I can still submit this form?

Sorry for my carelessness; I hope you can give me this chance.

Best Regards,
Qing Lian

Sample submission files

Could you please provide a sample submission file? I am not sure how you are calculating the classification accuracy, or how the predictions should be written to the file.
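
In the meantime, here is a hypothetical sketch of a submission writer, assuming the classification track expects one predicted class label per line, in the order given by image_list.txt; this format is a guess based on earlier VisDA challenges, not a confirmed specification:

def write_submission(predictions, image_list_path="image_list.txt", out_path="result.txt"):
    # predictions: {filename: predicted class id} -- a hypothetical structure.
    with open(image_list_path) as f:
        names = [line.split()[0] for line in f if line.strip()]
    with open(out_path, "w") as out:
        for name in names:
            out.write(f"{predictions[name]}\n")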
