
NeurIPS 2021: MineRL Competition Starter Kit


This repository is the main MineRL 2021 Competition submission template and starter kit! Compete to obtain a diamond now!

This repository contains:

  • Documentation on how to submit your agent to the leaderboard
  • The procedure for Round 1 and Round 2
  • Starter code on which to base your submission!


Competition overview and tracks

The competition is centered around one goal: obtain a diamond in Minecraft from a random starting location without any items. There are two separate tracks, each with its own rules and leaderboard. You may participate in both or in only one of them, but you need to make separate submissions for the two tracks. To choose the track for a submission, see the description of the aicrowd.json file below.

  • Intro track uses the MineRLObtainDiamond-v0 environment, which provides the original observation and action spaces. In this track you are free to use any means to reach the diamond: script the agent (see the baseline solutions), train the agent, or combine both! The intro track only has one round (Round 1). No training happens on the AIcrowd evaluator side, so you only need to worry about the test_submission_code.py file.
  • Research track uses the MineRLObtainDiamondVectorObf-v0 environment, in which both observation and action spaces are obfuscated to prevent manually coding actions (which is also prohibited by the rules). The amount of training is restricted to 8M samples and four days (see the rules). The research track has two rounds (Round 1 and Round 2); a minimal example of creating either environment follows this list.
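
Both environments are created through the standard Gym API. A minimal sketch, assuming the minerl package is installed (importing it registers the environments with gym):

    # Create the per-track environments (sketch).
    import gym
    import minerl  # noqa: F401 -- importing registers the MineRL environments

    intro_env = gym.make("MineRLObtainDiamond-v0")              # intro track
    research_env = gym.make("MineRLObtainDiamondVectorObf-v0")  # research track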

Competition Procedure - Intro track

In the intro track you will train your agents locally and upload them to AIcrowd (via git) to be evaluated by the organizers.

  1. Sign up to join the competition on the AIcrowd website.
  2. Clone this repo and start developing your submissions.
  3. Update aicrowd.json file (team information, track information, etc. See details below).
  4. Train your agents locally, place them under the ./train directory, update test_submission_code.py with your agent code (a minimal sketch follows this list), and make sure the submission package works correctly with utility/evaluation_locally.sh.
  5. Submit your trained models to AIcrowd Gitlab for evaluation (full instructions below). The automated evaluation setup will evaluate the submissions against the validation environment, to compute and report the metrics on the leaderboard of the competition.
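
As an illustration of step 4, "your agent code" in the intro track can be as simple as a hand-scripted policy over the named actions. This is a hedged sketch of the shape of such a loop, not the actual baseline solution:

    # A hand-scripted policy for the intro track's named action space (sketch).
    import gym
    import minerl  # noqa: F401 -- importing registers the MineRL environments

    env = gym.make("MineRLObtainDiamond-v0")
    obs = env.reset()
    for _ in range(100):
        action = env.action_space.noop()  # dict with every action set to "do nothing"
        action["forward"] = 1             # hold the forward key
        action["attack"] = 1              # keep attacking whatever is in front
        obs, reward, done, info = env.step(action)
        if done:
            break
    env.close()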

After Round 1 ends, organizers will inspect the code repositories of the top participants to ensure compliance with the competition rules, after which intro track winners are announced.

Competition Procedure - Research track

In Round 1 of the research track you will train your agents locally with a limited number of samples and then upload them to AIcrowd (via git) to be evaluated by the organizers.

  1. Sign up to join the competition on the AIcrowd website.
  2. Clone this repo and start developing your submissions.
  3. Update aicrowd.json file (team information, track information, etc. See details below).
  4. Train your models using the utility/train_locally.sh script (training code must be inside the train_submission_code.py file), update the test_submission_code.py code as well, and make sure the submission package works correctly with utility/evaluation_locally.sh.
  5. Submit your trained models to AIcrowd Gitlab for evaluation (full instructions below). The automated evaluation setup will evaluate the submissions against the validation environment, to compute and report the metrics on the leaderboard of the competition.

Note that you must submit your training code during Round 1 as well! Organizers use this to verify that your training follows the competition rules.

Once Round 1 is complete, the organizers will examine the code repositories of the top submissions on the leaderboard to ensure compliance with the competition rules.

In Round 2 (research track only), top participants of Round 1 will be invited to submit their solutions; this time the evaluator system trains the agent on the organizers' servers before evaluating it. No pre-trained agents are submitted!

How to Submit a Model!

In brief: you define your Python environment using Anaconda environment files, and the AIcrowd system will build a Docker image and run your code using the docker scripts inside the utility directory.

Setup

  1. Clone the GitHub repository or press the "Use this Template" button on GitHub!

    git clone https://github.com/minerllabs/competition_submission_starter_template.git
    
  2. Install competition-specific dependencies! Make sure you have JDK 8 installed first!

    # 1. Make sure to install the JDK first
    # -> Go to http://minerl.io/docs/tutorials/getting_started.html
    
    # 2. Install the `minerl` package and its dependencies.
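
    # (Assumed command; check the MineRL docs linked above for the exact pinned version.)
    pip3 install --upgrade minerl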
    
  3. Specify your specific submission dependencies (PyTorch, Tensorflow, kittens, puppies, etc.)

    • Anaconda Environment. To make a submission you need to specify the environment using Anaconda environment files. It is also recommended that you recreate the environment on your local machine. Anaconda version 4.5.11 or later is required to correctly populate environment.yml (install it by following the instructions here). Then:

      • Create your new conda environment

        conda env create -f environment.yml 
        conda activate minerl
      • Your code-specific dependencies. Add your own dependencies to the environment.yml file. Remember to add any additional channels; PyTorch requires the pytorch channel, for example. You can also install them locally using

        conda install <your-package>
    • Pip Packages. If you need pip packages (not available on conda), you can add them to the environment.yml file (see the currently populated version).

  • Apt Packages. If your training procedure or agent depends on specific Debian (Ubuntu, etc.) packages, add them to apt.txt.

How do I specify my software runtime?

As mentioned above, the software runtime is specified mainly in two places:

  • environment.yml -- The Anaconda environment specification. If you use a conda environment to run your submission code, you can export the exact environment.yml file with

    conda env export --no-build > environment.yml
    
  • apt.txt -- The Debian packages (via aptitude) used by your training procedure!

These files are used to construct both the local and AIcrowd docker containers in which your agent will train.

If the above is too restrictive for defining your environment, see this Discourse topic for more information.

What should my code structure be like?

Please follow the example structure shared in the starter kit. The different files and directories have the following meanings:

.
├── aicrowd.json             # Submission meta information like your username
├── apt.txt                  # Packages to be installed inside docker image
├── data                     # The downloaded data; the path to this directory is also available as the `MINERL_DATA_ROOT` env variable
├── test_submission_code.py  # IMPORTANT: Your testing/inference phase code. NOTE: This is NOT the entry point for the testing phase!
├── train                    # Your trained model MUST be saved inside this directory
├── train_submission_code.py # IMPORTANT: Your training phase code (only needed for the Research track)
├── test_framework.py        # The entry point for the testing phase, which sets up the environment. Your code DOES NOT go here.
└── utility                  # The utility scripts which provide a smoother experience to you.
    ├── debug_build.sh
    ├── docker_run.sh
    ├── environ.sh
    ├── evaluation_locally.sh
    ├── parser.py
    ├── train_locally.sh
    └── verify_or_download_data.sh

Finally, you must specify an AIcrowd submission JSON in aicrowd.json to be scored!

The aicrowd.json of each submission should contain the following content:

{
  "challenge_id": "aicrowd-neurips-2021-minerl-diamond-challenge",
  "authors": ["your-aicrowd-username"],
  "tags": "change-me",
  "description": "sample description about your awesome agent",
  "license": "MIT",
  "gpu": true
}

This JSON is used to map your submission to the challenge, so please remember to use the correct challenge_id as specified above.

Please specify whether your code will use a GPU for the evaluation of your model. If you specify true for the GPU, an NVIDIA Tesla K80 GPU will be provided and used for the evaluation.
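
If your code should run correctly whether or not a GPU was requested, selecting the compute device at run time keeps a single code path. A minimal sketch, assuming a PyTorch-based agent (the starter kit does not mandate any particular framework):

    # Pick the device at run time so the same code works with "gpu": true or false.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Running evaluation on {device}")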

Remember: You need to specify "tags" in aicrowd.json, which needs to be either "intro" or "research". This defines the track for which you are submitting.

Dataset location

You don't need to upload the MineRL dataset with your submission; it will be provided to online submissions at the MINERL_DATA_ROOT path. For local training and evaluation, you can download it once via ./utility/verify_or_download_data.sh or place it manually in the ./data/ folder.
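
Once the data is in place, it can be iterated with the minerl data API. A minimal sketch (the batch parameters are illustrative placeholders):

    # Iterate over the human demonstration dataset (sketch).
    import os

    import minerl

    data = minerl.data.make(
        "MineRLObtainDiamondVectorObf-v0",
        data_dir=os.environ.get("MINERL_DATA_ROOT", "data"),
    )
    for obs, action, reward, next_obs, done in data.batch_iter(
        batch_size=32, seq_len=1, num_epochs=1
    ):
        pass  # feed each batch to your learner here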

(Research track) IMPORTANT: Saving Models during Training!

Note: This only applies to the Research track

Before you submit to the Research track, make sure that your code does the following.

  • During training (train_submission_code.py) save your models to the train/ folder.
  • During testing (test_submission_code.py) load your model from the train/ folder.

It is absolutely imperative that you save your models during training (train_submission_code.py) so that they can be used in the evaluation phase (test_submission_code.py) on AIcrowd, and so that the organizers can verify your training code in Round 1 and train agents during Round 2!
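
A minimal sketch of that save/load contract, using only the standard library (the file name and the use of pickle are hypothetical; substitute your framework's own checkpointing calls):

    # The save/load contract between the training and testing scripts (sketch).
    import os
    import pickle

    TRAIN_DIR = "train"

    def save_agent(agent, name="agent.pkl"):
        # Call this at the end of train_submission_code.py.
        os.makedirs(TRAIN_DIR, exist_ok=True)
        with open(os.path.join(TRAIN_DIR, name), "wb") as f:
            pickle.dump(agent, f)

    def load_agent(name="agent.pkl"):
        # Call this at the start of test_submission_code.py.
        with open(os.path.join(TRAIN_DIR, name), "rb") as f:
            return pickle.load(f)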

How to submit!

To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com/.

You will have to add your SSH Keys to your GitLab account by following the instructions here. If you do not have SSH Keys, you will first need to generate one.

Then you can create a submission by making a tag push to your repository on https://gitlab.aicrowd.com/. Any tag push (where the tag name begins with "submission-") to your private repository is considered a submission.
You can then add the correct git remote and finally submit by doing:

cd competition_submission_starter_template
# Add AIcrowd git remote endpoint
git remote add aicrowd git@gitlab.aicrowd.com:<YOUR_AICROWD_USER_NAME>/competition_submission_starter_template.git
git push aicrowd master

# Create a tag for your submission and push
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

# Note : If the contents of your repository (latest commit hash) does not change,
# then pushing a new tag will **not** trigger a new evaluation.

You now should be able to see the details of your submission at: https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/competition_submission_starter_template/issues/

NOTE: Remember to update your username in the link above 😉

In the link above, you should start seeing the evaluation of your submission take shape (each of the steps can take a bit of time, so please be patient 😉), and if everything works out correctly, the final scores will be reported on the submission issue.

Best of Luck 🎉 🎉

Other Concepts

(Research track) Time constraints

Note: This only applies to the research track.

Round 1

You have to train your models locally with under 8,000,000 samples and with hardware worse than or comparable to that described above, and upload the trained model in the train/ directory. However, to make sure your training code is compatible with the next round's interface, the training code will be executed in this round as well, with a timeout of 5 minutes.

Round 2

You are expected to train your model online using the training phase docker container and output the trained model in the train/ directory. You need to ensure that your submission is trained in under 8,000,000 samples and within a 4-day period; otherwise, the container will be killed.
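
One simple way to stay inside the sample budget is to count environment steps yourself. The wrapper below is an illustrative sketch, not part of the starter kit:

    # Track the 8,000,000-sample budget during training (sketch).
    import gym

    SAMPLE_BUDGET = 8_000_000

    class SampleBudgetWrapper(gym.Wrapper):
        def __init__(self, env):
            super().__init__(env)
            self.samples_used = 0

        def step(self, action):
            self.samples_used += 1
            if self.samples_used > SAMPLE_BUDGET:
                raise RuntimeError("Sample budget exceeded; stop training.")
            return self.env.step(action)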

Local evaluation

You can perform local training and evaluation using the utility scripts shared in this directory. To mimic the online training phase, run ./utility/train_locally.sh from the repository root; you can specify --verbose for complete logs.

aicrowd_minerl_starter_kit❯ ./utility/train_locally.sh --verbose
2019-07-22 07:58:38 root[77310] INFO Training Start...
2019-07-22 07:58:38 crowdai_api.events[77310] DEBUG Registering crowdAI API Event : CROWDAI_EVENT_INFO training_started {'event_type': 'minerl_challenge:training_started'} # with_oracle? : False
2019-07-22 07:58:40 minerl.env.malmo.instance.17c149[77310] INFO Starting Minecraft process: ['/var/folders/82/wsds_18s5dq321scc1j531m40000gn/T/tmpnyzpjrsc/Minecraft/launchClient.sh', '-port', '9001', '-env', '-runDir', '/var/folders/82/wsds_18s5dq321scc1j531m40000gn/T/tmpnyzpjrsc/Minecraft/run']
2019-07-22 07:58:40 minerl.env.malmo.instance.17c149[77310] INFO Starting process watcher for process 77322 @ localhost:9001
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG This mapping 'snapshot_20161220' was designed for MC 1.11! Use at your own peril.
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG #################################################
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG          ForgeGradle 2.2-SNAPSHOT-3966cea
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG   https://github.com/MinecraftForge/ForgeGradle
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG #################################################
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG                Powered by MCP unknown
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG              http://modcoderpack.com
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG          by: Searge, ProfMobius, Fesh0r,
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG          R4wk, ZeuX, IngisKahn, bspkrs
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG #################################################
2019-07-22 07:58:48 minerl.env.malmo.instance.17c149[77310] DEBUG Found AccessTransformer: malmomod_at.cfg
2019-07-22 07:58:49 minerl.env.malmo.instance.17c149[77310] DEBUG :deobfCompileDummyTask
2019-07-22 07:58:49 minerl.env.malmo.instance.17c149[77310] DEBUG :deobfProvidedDummyTask
...

For local evaluation of your code, you can use ./utility/evaluation_locally.sh; add --verbose if you want to view complete logs.

aicrowd_minerl_starter_kit❯ ./utility/evaluation_locally.sh
{'state': 'RUNNING', 'score': {'score': '0.0', 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 1001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamondVectorObf-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 1001, 'environment': 'MineRLObtainDiamondVectorObf-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': '0.0', 'score_secondary': 0.0}}}}
{'state': 'RUNNING', 'score': {'score': '0.0', 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 2001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamondVectorObf-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 2001, 'environment': 'MineRLObtainDiamondVectorObf-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': '0.0', 'score_secondary': 0.0}}}}
{'state': 'RUNNING', 'score': {'score': '0.0', 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 3001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamondVectorObf-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 3001, 'environment': 'MineRLObtainDiamondVectorObf-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': '0.0', 'score_secondary': 0.0}}}}
{'state': 'RUNNING', 'score': {'score': '0.0', 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 4001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamondVectorObf-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 4001, 'environment': 'MineRLObtainDiamondVectorObf-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': '0.0', 'score_secondary': 0.0}}}}
{'state': 'RUNNING', 'score': {'score': '0.0', 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 5001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamondVectorObf-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 5001, 'environment': 'MineRLObtainDiamondVectorObf-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': '0.0', 'score_secondary': 0.0}}}}
{'state': 'RUNNING', 'score': {'score': '0.0', 'score_secondary': 0.0}, 'instances': {'1': {'totalNumberSteps': 6001, 'totalNumberEpisodes': 0, 'currentEnvironment': 'MineRLObtainDiamondVectorObf-v0', 'state': 'IN_PROGRESS', 'episodes': [{'numTicks': 6001, 'environment': 'MineRLObtainDiamondVectorObf-v0', 'rewards': 0.0, 'state': 'IN_PROGRESS'}], 'score': {'score': '0.0', 'score_secondary': 0.0}}}}
...

For running/testing your submission in a docker environment (identical to the online submission), you can use ./utility/docker_train_locally.sh and ./utility/docker_evaluation_locally.sh. You can also run the docker image with a bash entrypoint for debugging on the go with the help of ./utility/docker_run.sh. These scripts respect the following parameters:

  • --no-build: Skip the docker image build and use the last built image
  • --nvidia: Use nvidia-docker instead of docker, which makes your NVIDIA drivers available inside the docker image

Team

The quick-start kit was authored by Anssi Kanervisto and Shivam Khandelwal with help from William H. Guss.

The competition is organized by the following team:

  • William H. Guss (OpenAI and Carnegie Mellon University)
  • Alara Dirik (Boğaziçi University)
  • Byron V. Galbraith (Talla)
  • Brandon Houghton (OpenAI and Carnegie Mellon University)
  • Anssi Kanervisto (University of Eastern Finland)
  • Noboru Sean Kuno (Microsoft Research)
  • Stephanie Milani (Carnegie Mellon University)
  • Sharada Mohanty (AIcrowd)
  • Karolis Ramanauskas
  • Ruslan Salakhutdinov (Carnegie Mellon University)
  • Rohin Shah (UC Berkeley)
  • Nicholay Topin (Carnegie Mellon University)
  • Steven H. Wang (UC Berkeley)
  • Cody Wild (UC Berkeley)


Issues

Pretrain model size limit

Hello, I am one of the competition participants.

I want to upload my TensorFlow model weight file with the competition_submission_starter_template, but I cannot upload it to the AIcrowd GitLab because of the upload size limit.

I tried to find a way to extend that limit, but could not find one.

Please help me!
Thank you.

Fixed typos

There were some typos including an erroneous instruction.

#2

Download data error

Attempting to download the dataset...
Server does not support HTTPRange. threads_count is set to 1.
URL error encountered when downloading - please try again
None
Data verified! A+!

utility/docker_train_locally.sh: line 36: PWD: command not found

I believe the lines linked below in the docker utility files should say ${PWD} (curly braces, using the environment variable that is automatically set in most shells) or $(pwd) (round parentheses, executing the command-line tool and fetching its output) instead of $(PWD) (which tries to call a non-existent command-line tool).

See these highlights:
https://github.com/minerllabs/competition_submission_starter_template/blob/53ad35a8cff9ee217d5ab58313800dfd12142f8c/utility/docker_evaluation_locally.sh#L29-L31
https://github.com/minerllabs/competition_submission_starter_template/blob/53ad35a8cff9ee217d5ab58313800dfd12142f8c/utility/docker_train_locally.sh#L28-L30

How do I get ground truth segmentation readings in MineRL?

It is possible to obtain ground truth segmentation readings / channel in Malmo where pixels with different texture-pack tiles have different IDs / color representations.

How can I get the same segmentation readings / channel in MineRL?

How to replace the tileset of selected objects in MineRL environments?

Can anyone advise how we can replace the tileset of selected objects in MineRL environments? Could you please share helpful links that explain how to do this (e.g. GitHub permalinks, etc.)?

Can we change the tileset in prerecorded episodes from the Imitation Learning (human player) dataset (https://minerl.io/dataset)?

We want to create a binary mask channel for some objects to test our approach. To do this, we plan to replace a set of tiles of some objects with a unique color in order to create masks for these objects using a simple color threshold. Or maybe there is an easier way to get such object-related masks?

train_locally.py error

When running train_locally.py, the script prints "{'state': 'ERROR', 'score': {}, 'instances': [], 'reason': 'You started more instances (2) then allowed limit (1).'}". Is this intended behavior?

K80 or P100?

Is the GPU used for training K80 or P100?

One line says:

(6 CPU cores, 112 GiB RAM, 736 GiB SSD, and a single NVIDIA P100 GPU.)

But another line says:

an NVIDIA Tesla K80 GPU will be provided and used for the evaluation.

missing run.sh

The readme says run.sh is the entry point but there is no such file in the repo.

Unable to find aicrowd/neurips2019-minerl-challenge image

I would like to train locally with this script:

./utility/docker_train_locally.sh

However, it is unable to pull the docker image since it is not online or public.

Unable to find image 'aicrowd/neurips2019-minerl-challenge:agent' locally docker: Error response from daemon: pull access denied for aicrowd/neurips2019-minerl-challenge, repository does not exist or may require 'docker login': denied: requested access to the resource is denied. See 'docker run --help'.

Is there a way to get this image?

parser inside train.py has no effect

It seems that changes to the parser object in train.py have no effect. The script always uses the parser in utility/parser.py (called by python3 utility/parser.py || true in train_locally.sh).

Problems with the competition starter kit

It can run in a docker environment when I use xvfb-run -s "-ac -screen 0 1280x1024x24" python train.py.
But I get the following error when I use the script in the competition starter kit repo. When I run xvfb-run -s "-ac -screen 0 1280x1024x24" ./utility/evaluate_locally.sh, it gives the following error:

Traceback (most recent call last):
File "/root/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/root/anaconda3/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/root/anaconda3/lib/python3.6/site-packages/minerl/env/malmo.py", line 896, in keep_alive_pyro
InstanceManager.add_keep_alive(os.getpid(), callback)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 275, in getattr
self._pyroGetMetadata()
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 615, in _pyroGetMetadata
self.__pyroCreateConnection()
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 588, in __pyroCreateConnection
uri = _resolve(self._pyroUri, self._pyroHmacKey)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 1911, in _resolve
return nameserver.lookup(uri.object)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 185, in call
return self.__send(self.__name, args, kwargs)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 476, in _pyroInvoke
raise data # if you see this in your traceback, you should probably inspect the remote traceback as well
Pyro4.errors.NamingError: unknown name: minerl.instance_manager

Traceback (most recent call last):
File "run.py", line 2, in
import train
File "/home/user/competition/train.py", line 14, in
from envs import diamond_env_creator
File "/home/user/competition/envs.py", line 10, in
from hack import minerl
File "/home/user/competition/hack.py", line 74, in
minerl.env.malmo.InstanceManager.get_instance = get_instance
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 291, in setattr
self._pyroGetMetadata()
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 615, in _pyroGetMetadata
self.__pyroCreateConnection()
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 588, in __pyroCreateConnection
uri = _resolve(self._pyroUri, self._pyroHmacKey)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 1911, in _resolve
return nameserver.lookup(uri.object)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 185, in call
return self.__send(self.__name, args, kwargs)
File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 476, in _pyroInvoke
raise data # if you see this in your traceback, you should probably inspect the remote traceback as well
Pyro4.errors.NamingError: unknown name: minerl.instance_manager
+--- This exception occured remotely (Pyro) - Remote traceback:
| Traceback (most recent call last):
| File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/naming.py", line 91, in lookup
| uri, metadata = self.storage[name]
| KeyError: 'minerl.instance_manager'
|
| During handling of the above exception, another exception occurred:
|
| Traceback (most recent call last):
| File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/core.py", line 1421, in handleRequest
| data = method(*vargs, **kwargs) # this is the actual method call to the Pyro object
| File "/root/anaconda3/lib/python3.6/site-packages/Pyro4/naming.py", line 98, in lookup
| raise NamingError("unknown name: " + name)
| Pyro4.errors.NamingError: unknown name: minerl.instance_manager

How do I get a ground truth depth channel in MineRL in floating point absolute values similar to Malmo?

In Malmo, we can get the depth in floating point absolute values (block units) for each pixel.

We want to get the same absolute floating point depth values in MineRL, but have not yet found a way to do so.

In MineRL, we can get depth only in the following two ways:

  1. Normalised depth readings from 0 to 1 for each frame, which does not allow comparing the depth across different frames.
  2. A logarithmic depth representation with integer values from 0 to 255. However, in this case most of the values are in the range from 220 to 250, which results in a degraded depth representation with only about 30 distinct values.

Could someone advise how to get depth values in MineRL in floating point absolute values (block units) for each pixel?

Links:

MineRL has this added in the XML
https://microsoft.github.io/malmo/0.30.0/Schemas/MissionHandlers.html#element_VideoProducer
and get this value from the environment obs
https://minerl.io/docs/environments/handlers.html#visual-observations-pov-third-persion
which uses the buffer
https://github.com/microsoft/malmo/blob/26433ad2e60035726232ab54a3dac044dea9724f/Minecraft/src/main/java/com/microsoft/Malmo/MissionHandlers/VideoProducerImplementation.java

Malmo XML
https://github.com/microsoft/malmo/blob/26433ad2e60035726232ab54a3dac044dea9724f/Malmo/samples/Python_examples/radar_test.py#L189
And get the depth by createAgent,
https://github.com/microsoft/malmo/blob/26433ad2e60035726232ab54a3dac044dea9724f/Malmo/samples/Python_examples/radar_test.py#L49
get frame,
https://github.com/microsoft/malmo/blob/26433ad2e60035726232ab54a3dac044dea9724f/Malmo/samples/Python_examples/radar_test.py#L240
and then extract the depth
https://github.com/microsoft/malmo/blob/26433ad2e60035726232ab54a3dac044dea9724f/Malmo/samples/Python_examples/radar_test.py#L111

Only 'MineRLObtainDiamond-v0' is allowed during training

Currently train_locally.sh prevents participants from using any other environment via the check_for_allowed_environment(self, environment, payload) call. For training, we should allow participants to use all environments in MineRL.

Question regarding maximum file size (for trained agent models)

Hello!

My team and I are really struggling to upload an agent that is smaller than the maximum size allowed by Gitlab (10MB), as our model weighs around 96MB.

We were wondering if we can use Git LFS (Large File Storage) to upload our agent. This would mean that these two commands, git lfs install and git lfs pull, should be run on the machine running the test code for Round 1.

Thank you!
