
vipriors-challenges-toolkit's Introduction

VIPriors challenges toolkit

Collection of tools to support submissions to the 4th VIPriors workshop challenges.

Challenges

Please find the toolkit for each challenge here:

FAQ

All challenges

Q: Can we use any other data than the data provided?
A: No.

Q: Is it allowed to train our model on the validation data?
A: Yes.

Q: Is data augmentation allowed?
A: Yes, as long as the augmentations are only applied to the provided data.
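A minimal sketch of what "augmentations applied only to the provided data" can look like, assuming images as NumPy arrays (illustrative only, not an official recipe):

```python
import numpy as np

def augment(image, rng):
    """Random horizontal flip and brightness jitter, applied to a provided
    image only -- no external data is mixed in."""
    if rng.random() < 0.5:
        image = image[:, ::-1]           # horizontal flip
    scale = rng.uniform(0.8, 1.2)        # brightness jitter
    return np.clip(image * scale, 0.0, 255.0)

rng = np.random.default_rng(0)
img = np.full((32, 32, 3), 128.0)        # stand-in for a provided image
out = augment(img, rng)
```

Transforms like these derive every training sample from the challenge data itself, which is what keeps them within the rule.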

Object detection

Q: Can we use the semantic segmentation labels?
A: No.

vipriors-challenges-toolkit's People

Contributors

attila94 · bapmbr · dzambrano · oskyhn · rjbruin · truong11062002


vipriors-challenges-toolkit's Issues

No pre-training weights or additional datasets can be used

Dear organizers,
Hello, I took part in the action recognition track. The competition policy says that no pre-trained weights or additional datasets can be used. I can guarantee that I will not use them, but I cannot guarantee that others won't. So I wonder if you have a good way to tell whether challengers are using something they shouldn't?
Also, what should challengers submit in the end?
Yours
Marco

Submission error in development phase for ReID challenge

Hi, I got an error after submitting and running in the development phase.

Here is the error output:

'''
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/opt/conda/lib/python3.7/site-packages/numpy/core/fromnumeric.py:3257: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/opt/conda/lib/python3.7/site-packages/numpy/core/_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)

'''

and the mAP outputs 'nan'.

I am wondering whether this is caused by the submitted file or by the platform.

By the way, the .csv file seems fine, with 51 rows and 911 columns, and was generated using the code from the provided baseline.py.
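For what it's worth, the "Mean of empty slice" warning in the log usually means the evaluation averaged over zero matched queries, which then propagates nan into the mAP; a minimal reproduction of that failure mode:

```python
import warnings
import numpy as np

# The mean of an empty array emits "Mean of empty slice" and evaluates to
# nan -- exactly the pair of warnings shown in the log. An overall mAP of
# nan would follow if no query matched any gallery entry.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = np.mean(np.array([]))

assert np.isnan(result)
assert any(issubclass(w.category, RuntimeWarning) for w in caught)
```

So a first thing to check is whether the IDs in the submitted ranking actually line up with the gallery, rather than assuming a platform fault.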

Best

Can I share my code with *.so in peer review?

For privacy reasons, my code may not be allowed to be fully shared. Can I use Cython to compile some of it into *.so files? I promise the results can be reproduced without violating the rules.

Unfair competition and terrible experience! We need a more compelling reply

@rjbruin @Attila94 We suddenly received an email saying the deadline would be extended until Wednesday, September 29th, 2021, 23:59 UTC. The original deadline was Sept. 24th, 2021, 11 p.m. UTC, and we received the email on Sept. 26th, 2021, 10:10 a.m. UTC, two days later. Anyone who wanted the deadline delayed had plenty of time in that interval to keep training and improving their model, while most of us, not knowing the deadline would be extended, did nothing during those two days. I consider this cheating on time.

Later you sent another email updating the deadline to Sept. 28th, 01:59 AM UTC, which I think is an even worse decision. Those of us who received the previous email on the 26th at 10:10 a.m. UTC and believed the new deadline was September 29th, 23:59 UTC had some time to improve our models, but then you changed the deadline again, leaving us without enough time to train a new model. Meanwhile, the competitors who wanted the delay could improve their models between Sept. 24th, 11 p.m. UTC and Sept. 26th, 10:10 a.m. UTC, and the new time limit then prevented everyone else from doing the same. Please consider seriously whether this is fair. Most of us respected the Sept. 24th, 11 p.m. UTC deadline, which was clearly stated on the homepage. Only a few competitors benefit from this, perhaps having hidden their scores on purpose and then "forgotten" to submit their results to the board; they could see other competitors' results while hiding their own, which is a malicious way to win. For a research competition this sets a bad direction, and I don't think that is the purpose of holding this competition, yet changing the deadline again and again aggravates it.

By the way, looking at the Instance Segmentation results, the 2nd and 3rd place entries have identical scores on every evaluation metric, and the 2nd place team submitted only once and immediately matched that high score. I suspect they may have cheated, and I believe they are new competitors who joined after the Sept. 24th, 11 p.m. UTC deadline. Are these two results valid?
(screenshot of the leaderboard, 2021-09-28, 11:30 a.m.)

Submission error occurred in instance segmentation, can you fix it?

@Attila94 @rjbruin
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Traceback (most recent call last):
  File "/tmp/codalab/tmpZy6Ah4/run/program/evaluate.py", line 82, in <module>
    evaluate_from_files(args[0], args[1], args[2])
  File "/tmp/codalab/tmpZy6Ah4/run/program/evaluate.py", line 45, in evaluate_from_files
    with open(groundtruths_filepath, 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/codalab/tmpZy6Ah4/run/input/ref/groundtruths.json'

Also, I did not submit anything on 09/15/2021 at 12:57, yet some failed submissions are listed, which confuses me. The details are attached. Please help, thanks.
(screenshot of the submission list, 2021-09-15, 10:03 p.m.)

Two questions about this competition

Dear organizers,
Hello, I need to consult you about two questions regarding this competition:
1. When our team saw that results could be hidden from the leaderboard, we hid all of our results, and now the final ranking does not include them. Which ranking will prevail in the end?
2. We have good reason to suspect that the current top score of 76 points on the leaderboard was achieved using pre-trained weights or other datasets. After all, the most advanced methods score only 84.5 on the complete K400 dataset; how could anyone score 76 on the reduced small-sample K400?
We are anxious to get your reply.
Yours
Marco

Action Recognition Challenge Server

I used the chance.py you provided to randomly generate the .txt file and compressed it to .zip, but the submission on the server failed. Why is this?
(two screenshots attached)

Question about the poster session

Hi organizers,
I have noticed that all accepted works will be presented on ICCV gatherly in the poster session. Do the winning teams of the challenge need to provide posters?

submitting failed: no space left on device

Will this failed submission reduce our submission times?

docker: failed to register layer: Error processing tar file(exit status 1): write /opt/conda/lib/libmkl_avx512_mic.so: no space left on device.
See 'docker run --help'.

Questions about Incorrectly labeled data in Object Detection Challenge

Hello, I am interested in the 2021 VIPriors Object Detection Challenge. However, I found that there are some problems with the annotations in the training data. For example, some bounding boxes of the class 'steer' are incorrectly labeled at the position of the class 'front_pedal'.

Can we choose to discard this incorrect labeling information? Thanks.
@rjbruin @oskyhn
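If the organizers permit discarding the suspect labels, it could look like the sketch below; the file path and category name in the commented usage are assumptions based on this issue, not the toolkit's actual interface.

```python
import json

def drop_category(ann_path, out_path, bad_name):
    """Remove all annotations belonging to one (mislabeled) category from a
    COCO-style annotation file, leaving images and categories untouched."""
    with open(ann_path) as f:
        coco = json.load(f)
    bad_ids = {c["id"] for c in coco["categories"] if c["name"] == bad_name}
    coco["annotations"] = [a for a in coco["annotations"]
                           if a["category_id"] not in bad_ids]
    with open(out_path, "w") as f:
        json.dump(coco, f)

# Hypothetical usage:
# drop_category("annotations/vipriors-object-detection-train.json",
#               "annotations/train-cleaned.json", "steer")
```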

Kinetics400ViPriors Download extremely slow

Hi @rjbruin, I am a student from China and I am very interested in the 2021 VIPriors Action Recognition Challenge. But I am suffering from an extremely slow download speed (10 KB/s) when trying to download the datasets from SURF Drive. I would really appreciate it if you could provide another download link, such as Google Cloud or Baidu Cloud. Thanks a lot!

Image Classification -- Question about the Datatset

Dear Professor:

We have a question about the dataset at the training stage.

  1. We are not allowed to train our model on original ImageNet-2012 validation data.
  2. Can we train our model on the generated training and validation data?

Are the above two understandings right?

Thanks for your time!

Certificate of competition

Is there a certificate for this competition?
How many of the top finishers in each challenge will receive a certificate?

Final submit issue

Dear organizer, I have a few questions. 1. Does our final result have to be on the leaderboard? 2. As described at https://vipriors.github.io/, all deadlines are 23:59 GMT, but the CodaLab phase ended at 22:59 GMT. Because of these two issues, our best result does not appear on the leaderboard. Does this matter, and is there anything you or I can do?

Submission not on leaderboard

I made two submissions to the Instance Segmentation challenge, but I don't find myself on the leaderboard. Did I do something wrong, or is it something else?

Technical report issue

I'm a participant in the action recognition track and I have a few questions about the technical report. 1. Is there a template for the technical report? 2. For verification, the deadline is October 1st, 23:59 GMT, and can we send the report directly to [email protected]?

for action recognition dataset

Can you directly provide kinetics400ViPriors-val.csv, kinetics400ViPriors-train.csv, and kinetics400ViPriors-test.csv for us?

GAN-based Annotation with other data

I know that using a network pre-trained on additional data is not allowed for training the detection model.
But I want to confirm whether we can use GAN models pre-trained on data other than the challenge data, for data augmentation only?
Thanks.

Questions about the technical report.

Hello,

I am interested in the VIPriors challenge. I am curious to know whether the technical report is allowed to be submitted to other conferences or journals?

Looking forward to your reply.

Best wishes!

Unfair and extremely bad competition experience

First of all, the reason for reopening the competition is unacceptable. Everyone could see the deadline on the competition page. Regarding the time difference between https://vipriors.github.io/ and CodaLab, no contestant raised an objection to the organizers. In my view, everyone should either treat the time on CodaLab as the deadline or accept that the deadlines differ, and in practice everyone did use the CodaLab time because it ended sooner.

Most players submitted their scores a few hours or days before the deadline, but some chose not to make theirs public. As I see it, the only motivation for reopening is that those players did not disclose their results before the deadline, so why reopen the submission channel at all? Wouldn't it be enough to provide them with a way to post their existing results to the board? Reopening the submission channel was not necessary.

(screenshots: the leaderboard on the 24th and on the 28th)

Secondly, on the fairness of the competition: in the object detection competition, scores barely improved in the days leading up to the deadline, yet the current 2nd-place player climbed to second place in just 9 hours, which is hard to believe given that the same player ranked only 18th on the leaderboard of the 24th. I suspect they retrained a better model between the 25th and the 27th, or used other methods. Is it fair to submit results from a model trained between the 25th and the 27th, while most contestants believed the competition was over? Did some contestants, learning through feedback from the organizers that they could resubmit, use the 25th through the 27th to improve their models?

Finally, I recommend using the initial leaderboard as the final result of the competition. There are too many unfair factors otherwise, and using the initial leaderboard is fair to all contestants.

The same unfairness problems exist in other competitions, such as Instance Segmentation.

No updates can be seen in 2022

Hi, I would like to ask whether the data will be updated, or whether we will use the same data as last year. The CodaLab website has not been updated yet either; where shall we sign up for the challenge? Also, is the final evaluation criterion AP@0.5:0.95 as in COCO, or the criterion from the DelftBikes paper "Hallucination In Object Detection — A Study In Visual Part Verification"? Thank you.

Question about technical report deadline

Hi, organizers:
To avoid disputes, please clarify the deadline for the technical report. Is it 23:59 UTC on October 1st, or some other time? And is this the deadline for sending the email or for uploading to arXiv?

enquiry about VIPriors Object Detection challenge

I am sorry, but I am very confused about your instructions for setting up the dataset.
You say there are three JSON files, i.e., vipriors-object-detection-train.json, vipriors-object-detection-val.json, and vipriors-object-detection-test.json, in the provided annotations folder.
(screenshot attached)

However, arrange_images.py only clearly explains the training dataset. I do not know what the test images are for: evaluation during training, or generating submission results?
I also do not know what to do with vipriors-object-detection-val.json and vipriors-object-detection-test.json.
(screenshot attached)

Could you explain which group of images is used for generating submission results, and which is for testing and evaluation during training?

Many thanks.
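For what it's worth, assuming the three files follow the COCO layout, one can inspect which images belong to each split with a small helper (the paths in the commented usage are taken from this issue, not verified):

```python
import json

def image_names(ann_path):
    """Return the image file names listed in a COCO-style annotation file."""
    with open(ann_path) as f:
        return [img["file_name"] for img in json.load(f)["images"]]

# Hypothetical usage -- val images for evaluation during training, test
# images only for generating the submission file:
# for split in ("train", "val", "test"):
#     print(split, len(image_names(
#         f"annotations/vipriors-object-detection-{split}.json")))
```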

Query about The Re-submission

The re-submission site should be open only for those who did not submit their final results due to the unclarity about the submission deadline (only a one-hour deviation). Those who had already submitted their results on Sept. 24th should not be allowed to re-submit, since there is just one submission opportunity per day. Otherwise, this provides a cheating chance for some.

Take an example from the image classification track. The team "Wprofessor" submitted a result on Sept. 24th with a top-1 accuracy of only about 72%, using their submission opportunity at the Sept. 24th deadline. Because the deadline was extended to Sept. 28th for those who had not submitted on Sept. 24th, "Wprofessor" re-submitted on Sept. 27th and increased their result from 72% to 75%, a large margin gained in only a few days (this track is highly resource-consuming, requiring hundreds of GPU-days at least). This is ridiculous, unfair, and unacceptable.

(two screenshots attached)

Occluded labels

Hello,

I am interested in 2021 VIPriors Object Detection Challenge.

I have a question: there are four possible states (intact, damaged, absent and occluded). In your paper, the missing parts (absent, occluded) are not used during training or testing, but occluded parts are used during training in your code?
bike_dataset.py

for ind, i in enumerate(labels['parts'], 0):
    lab = labels['parts'][i]
    if lab['object_state'] != 'absent':  # note: occluded parts pass this check
        loc = lab['absolute_bounding_box']
        xmin = loc['left']
        xmax = loc['left'] + loc['width']
        ymin = loc['top']
        ymax = loc['top'] + loc['height']
        boxes.append([xmin, ymin, xmax, ymax])
        labs.append(ind + 1)

Are the occluded labels used in the test set?
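If the stricter reading of the paper is the intended one, the loop above would have to skip both states; a sketch of that variant (this is the questioner's reading, not necessarily the organizers' intent):

```python
SKIP_STATES = {"absent", "occluded"}

def parts_to_boxes(labels):
    """Collect [xmin, ymin, xmax, ymax] boxes and 1-based class indices,
    skipping both absent and occluded parts."""
    boxes, labs = [], []
    for ind, name in enumerate(labels["parts"]):
        lab = labels["parts"][name]
        if lab["object_state"] in SKIP_STATES:
            continue
        loc = lab["absolute_bounding_box"]
        boxes.append([loc["left"], loc["top"],
                      loc["left"] + loc["width"], loc["top"] + loc["height"]])
        labs.append(ind + 1)
    return boxes, labs
```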

instance seg, submitting failed: docker: write /var/lib/docker/tmp/GetImageBlob487409347: no space left on device.

Hi @rjbruin, I submitted results for the val and test sets and got the same error both times, but I can get the val set result offline with the toolkit you provided. Please help, thanks.

Unable to find image 'rjbruin/vipriors-object-detection-evaluation:1.1' locally
1.1: Pulling from rjbruin/vipriors-object-detection-evaluation
(layer-by-layer pull progress omitted: eleven layers were pulled and their checksums verified successfully before the error below)
docker: write /var/lib/docker/tmp/GetImageBlob487409347: no space left on device.
See 'docker run --help'.

Test data

Hello, does the action recognition track allow pseudo-labeling on the test data? Or unsupervised pre-training using the test data?

Imagenet dataset too large to download

Running generate_images.py requires downloading the full ImageNet 2012 dataset, which is quite large. Is there any chance a drive link could be provided containing just the subset of the dataset we are expected to work with?
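Until such a link exists, one workaround is to extract only the needed files from a full ImageNet copy elsewhere; a hypothetical sketch, assuming one can derive a list of relative file names for the subset (the list file and directory layout are assumptions, not the toolkit's interface):

```python
import shutil
from pathlib import Path

def copy_subset(list_file, src_root, dst_root):
    """Copy only the files named (one relative path per line) in list_file
    from src_root to dst_root, recreating the directory layout."""
    src, dst = Path(src_root), Path(dst_root)
    for rel in Path(list_file).read_text().splitlines():
        if not rel.strip():
            continue
        target = dst / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src / rel, target)
```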

Action Recognition - Corrupted Videos [SOLVED]

1. The downloaded data is damaged; is this a download problem or a data problem?
2. Submitting results requires groundtruths.txt, but GitHub only gives the format of submission.txt, without mentioning groundtruths.txt.
