cloud-cv / evalai-starters
How to create a challenge on EvalAI?
Hi,
I am getting the following errors when I update challenge_config.yaml after starting from the given template. I understand the reasons behind some of these checks, but at the same time it is quite confusing that one can't change things even during the challenge development phase. It seems to be a hard requirement that the host update the template in one go on the challenge branch; otherwise these errors start to pop up.
Following errors occurred while validating the challenge config:
ERROR: Challenge phase 2 not found in config. Deletion of existing challenge phase after challenge creation is not allowed.
ERROR: Dataset split 3 doesn't exist. Addition of a new dataset split after challenge creation is not allowed.
ERROR: Challenge phase split (leaderboard_id: 1, challenge_phase_id: 1, dataset_split_id: 2) doesn't exist. Addition of challenge phase split after challenge creation is not allowed.
ERROR: Challenge phase split (leaderboard_id: 1, challenge_phase_id: 1, dataset_split_id: 3) doesn't exist. Addition of challenge phase split after challenge creation is not allowed.
ERROR: Challenge phase split (leaderboard_id: 1, challenge_phase_id: 2, dataset_split_id: 2) not found in config. Deletion of existing challenge phase split after challenge creation is not allowed.
ERROR: Challenge phase split (leaderboard_id: 1, challenge_phase_id: 2, dataset_split_id: 1) not found in config. Deletion of existing challenge phase split after challenge creation is not allowed.
There was an error while creating an issue: 404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/repos#get-a-repository"}
Exiting the challenge_processing_script.py script after failure
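Reading the errors above, the validator requires that every phase, dataset split, and phase split that existed at challenge-creation time still be present in the config, and that no new dataset splits or phase splits be introduced. A minimal sketch of the relevant challenge_config.yaml sections (IDs and names here are hypothetical, not the asker's actual config):

```yaml
challenge_phases:
  - id: 1
    name: Dev Phase
  - id: 2          # existing phases must stay; removing phase 2 triggers the first error
    name: Test Phase
dataset_splits:
  - id: 1
    name: Test Split
    codename: test_split
  # a new split (e.g. id: 3) cannot be added after creation
challenge_phase_splits:
  - challenge_phase_id: 2   # existing phase splits must also be kept as-is
    leaderboard_id: 1
    dataset_split_id: 1
```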
I am trying to create a new challenge. Where should I put the test annotation file?
Any help would be highly appreciated.
Hi, I'm having difficulty retrieving a docker image that I submitted. I'm following the remote challenge evaluation script, and I am able to retrieve the submission from the queue which contains a link to the following:
{"submitted_image_uri": "937891341272.dkr.ecr.us-east-1.amazonaws.com/avalon-challenge-1882-participant-team-17311:316d69a7-cf5d-46a9-b631-06b705cf2603"}
However, when I access the submitted_image_uri, I get a "Not Authorized" error. What authorization do I need to do to access this?
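Pulling from a private ECR registry requires a Docker login using AWS credentials for the challenge's AWS account (typically provisioned by the EvalAI admins; that provisioning step is an assumption here). A minimal sketch using boto3, with a hypothetical helper that splits the registry host out of the submitted_image_uri:

```python
import base64
import subprocess


def registry_from_uri(image_uri: str) -> str:
    # "registry.host/repo:tag" -> "registry.host"
    return image_uri.split("/", 1)[0]


def docker_login_to_ecr(region: str = "us-east-1") -> None:
    # Assumes AWS credentials for the challenge account are already
    # configured (env vars, ~/.aws/credentials, or an instance role).
    import boto3  # pip install boto3

    ecr = boto3.client("ecr", region_name=region)
    auth = ecr.get_authorization_token()["authorizationData"][0]
    # The token is base64("AWS:<password>"); split it for docker login.
    user, _, password = (
        base64.b64decode(auth["authorizationToken"]).decode().partition(":")
    )
    subprocess.run(
        ["docker", "login", "--username", user, "--password-stdin",
         auth["proxyEndpoint"]],
        input=password.encode(),
        check=True,
    )
```

After a successful login, `docker pull <submitted_image_uri>` should no longer return "Not Authorized".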
I got the following error: "500 Server Error: Internal Server Error for url: https://eval.ai/api/challenges/challenge/challenge_host_team/2785/validate_challenge_config/". What could be a possible solution for it?
Hello, I want to hold a remote challenge evaluation.
When I read the README.md file in the remote_challenge_evaluation folder, it says:
"4. Create a new virtual python3 environment for installating the worker requirements."
But I don't know where to create a new virtual env for our evaluation; more information is needed.
Thanks!
I encountered the following error when running ./run.sh
zip warning: first full name: evaluation_script/__init__.py
second full name: evaluation_script/pycocotools/__init__.py
name in zip file repeated: __init__.py
this may be a result of using -j
zip error: Invalid command arguments (cannot repeat names in zip file)
Is -j necessary? If so, how can we fix this when we have multiple __init__.py files under different sub-directories? Thanks!
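For what it's worth, `-j` "junks" (strips) directory paths, which is why the two `__init__.py` files collide. Keeping relative paths avoids the collision; in the zip CLI that run.sh uses, dropping `-j` in favour of `-r` has the same effect. A sketch of the equivalent archiving step in Python's zipfile (directory names are hypothetical):

```python
import os
import zipfile


def zip_preserving_paths(src_dir: str, out_zip: str) -> None:
    # Store each file under its path relative to src_dir's parent, so
    # evaluation_script/__init__.py and
    # evaluation_script/pycocotools/__init__.py get distinct entries
    # instead of colliding the way `zip -j` makes them collide.
    base = os.path.dirname(os.path.abspath(src_dir))
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, base))
```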
Hello,
we are trying to host a challenge that compares users' submissions with ground-truth image data.
We have roughly 10 GB of such ground-truth data, which needs to sit on the evaluation server.
It is not obvious where to put these files; I assume that pushing them into the GitHub repository is not an option here?
Thanks!
Issue:
The challenge config here:
EvalAI-Starters/challenge_config.yaml (line 17 in 8338085)
has tags and domain configuration available.

I am trying to run a simple challenge evaluation server using the "create challenge using GitHub" method. I get the following error once I push to the challenge branch, with changes made to the github/host_config.json file.
Run actions/setup-python@v2
Version 3.7.4 was not found in the local cache
Error: Version 3.7.4 with arch x64 not found
The list of all available versions can be found here: https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
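setup-python can only install versions present in its manifest for the runner image, and exact patch releases such as 3.7.4 disappear from it over time. One possible fix (the workflow file path is an assumption) is to request only the minor version and let the action resolve the newest available patch:

```yaml
# .github/workflows/<workflow>.yml (hypothetical location)
- uses: actions/setup-python@v2
  with:
    python-version: "3.7"   # resolves to the latest available 3.7.x
```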
I have the following HTML code to insert an image in description.html. The image is in the same directory as description.html, but the image does not show up on the EvalAI website for the competition.
<img width="100%" src="example.png" alt="Example">
In utils.py, in two instances, the calls to fetch the repo are chained as:

def add_pull_request_comment(github_auth_token, repo_name, pr_number, comment_body):
    ...
    client = Github(github_auth_token)
    repo = client.get_user().get_repo(repo_name)

This does not allow the repo to be owned by an organization, or by a person different from the one owning the AUTH_TOKEN.
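A possible fix, sketched with PyGithub's get_repo, which accepts an owner-qualified name and therefore also resolves organization-owned repos (the explicit owner parameter is an assumption about how utils.py would be changed):

```python
def full_repo_name(owner: str, repo: str) -> str:
    # Owner-qualified name accepted by client.get_repo(), e.g. "Cloud-CV/EvalAI".
    return f"{owner}/{repo}"


def get_repo_for_any_owner(github_auth_token: str, owner: str, repo_name: str):
    # Unlike client.get_user().get_repo(repo_name), this also works when the
    # repo is owned by an organization or by a user other than the token's
    # owner, provided the token has access to it.
    from github import Github  # pip install PyGithub

    client = Github(github_auth_token)
    return client.get_repo(full_repo_name(owner, repo_name))
```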
I am holding a remote evaluation challenge with EvalAI.
I can successfully download the submissions from the website.
However, when I try to update the leaderboard with update_finished, I get an error in update_submission_data. I found that it goes wrong in the function make_request, at response.raise_for_status(). Specifically:
response = requests.request(
method=method, url=url, headers=headers, data=data
)
It reports requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://eval.ai/api/jobs/challenge/2253/update_submission/, and the response body is {"detail":"Authentication credentials were not provided."}.
How can I fix it?
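"Authentication credentials were not provided" suggests the Authorization header never reached the server. A sketch of passing it through the request shown above (the "Bearer" scheme is an assumption — verify it against the auth code in your version of the remote-evaluation starter):

```python
def auth_headers(auth_token: str) -> dict:
    # EvalAI expects the token in the Authorization header; the exact
    # scheme ("Bearer" here) is an assumption -- check your starter code.
    return {"Authorization": f"Bearer {auth_token}"}


def make_request(url: str, method: str, auth_token: str, data=None):
    import requests  # pip install requests

    response = requests.request(
        method=method, url=url, headers=auth_headers(auth_token), data=data
    )
    # Raises requests.exceptions.HTTPError on 4xx/5xx responses.
    response.raise_for_status()
    return response.json()
```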
I want to hold a remote challenge evaluation. When I read the README.md file in the remote_challenge_evaluation folder, it says:
"After receiving the details from the admin, please add these in the evaluation_script_starter.py"
But I cannot find evaluation_script_starter.py, and I have no idea what to add.
I'm trying to add this package as a requirement for evaluation. I uncommented the code here that calls pip install <package>, but I get the following error in the worker log when I try to submit. Are there some packages that can't be installed this way? Other packages install fine (e.g., shapely-1.7.1 from the example script).
Collecting pyxdameraulevenshtein
Downloading https://files.pythonhosted.org/packages/aa/e8/53d212009d6d40fdd98ef41585e5442812323d145aa47f507996093567f2/pyxDamerauLevenshtein-1.7.0.tar.gz
Installing build dependencies: started
[2022-02-03 18:44:29] ERROR WORKER_LOG Exception raised while creating Python module for challenge_id: 1542
Traceback (most recent call last):
File "/code/scripts/workers/submission_worker.py", line 321, in extract_challenge_data
CHALLENGE_IMPORT_STRING.format(challenge_id=challenge.id)
File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/tmp/tmpclumagos/compute/challenge_data/challenge_1542/__init__.py", line 73, in <module>
install("pyxdameraulevenshtein")
File "/tmp/tmpclumagos/compute/challenge_data/challenge_1542/__init__.py", line 54, in install
subprocess.check_call([sys.executable, "-m", "pip", "install", package])
File "/usr/local/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '-m', 'pip', 'install', 'pyxdameraulevenshtein']' died with <Signals.SIGKILL: 9>.
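A SIGKILL during `pip install` usually means the process was killed (often by the OOM killer) while compiling the package's C extension from the sdist. One workaround to sketch: ask pip for a prebuilt wheel so nothing is compiled inside the worker (whether a wheel exists for this package and Python version is not guaranteed):

```python
import subprocess
import sys


def pip_install_cmd(package: str) -> list:
    # --only-binary :all: forbids source builds, so pip either finds a
    # prebuilt wheel or fails fast instead of being killed mid-build.
    return [sys.executable, "-m", "pip", "install",
            "--only-binary", ":all:", package]


def install(package: str) -> None:
    # Drop-in variant of the starter's install() helper.
    subprocess.check_call(pip_install_cmd(package))
```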