
BEHAVIOR Challenge @ ICCV2021

This repository contains the starter code for the BEHAVIOR Challenge 2021, brought to you by the Stanford Vision and Learning Lab. For an overview of the challenge and the workshop, and for information about the tasks, evaluation metrics, datasets, and setup, visit the challenge website.

For more information or questions, contact us at [email protected]

Participation Guidelines

In the following, we summarize the most relevant points for participation. For a full description, visit the online BEHAVIOR Participation Guidelines.

Participate in the contest by registering on the EvalAI challenge page and creating a team. In the Minival and Evaluation phases, participants will upload docker containers with their agents, which will be evaluated on an AWS GPU-enabled instance. Before pushing a submission for remote evaluation, participants should test the submission docker locally to make sure it works. Instructions for training, local evaluation, and online submission are provided below.

Local Evaluation

  • Step 1: Clone the challenge repository

    git clone https://github.com/StanfordVL/BehaviorChallenge2021.git
    cd BehaviorChallenge2021

    Two example agents are provided in simple_agent.py and rl_agent.py: RandomAgent and PPOAgent. We also provide randomly initialized checkpoints for PPOAgent, stored in checkpoints/. Please implement your own agent and instantiate it from agent.py.
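A custom agent only needs to map each observation to an action. Below is a stdlib-only sketch of the general shape, in the spirit of RandomAgent; the class name, the reset()/act() method names, the action dimensionality, and the action bounds are all illustrative assumptions here, not the interface actually required by agent.py (check the provided agents for the real contract):

```python
import random

class MyRandomAgent:
    """Illustrative agent: samples each action dimension uniformly.

    The exact interface expected by agent.py may differ; treat the
    reset()/act() names and the action format as assumptions.
    """

    def __init__(self, action_dim=11, low=-1.0, high=1.0):
        # action_dim/low/high are hypothetical placeholders, not the
        # real BEHAVIOR action space.
        self.action_dim = action_dim
        self.low = low
        self.high = high

    def reset(self):
        # Called at the start of each episode; this agent keeps no state.
        pass

    def act(self, observation):
        # Ignore the observation and return a random continuous action.
        return [random.uniform(self.low, self.high)
                for _ in range(self.action_dim)]

agent = MyRandomAgent()
agent.reset()
action = agent.act({"rgb": None})
print(len(action))  # 11
```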

  • Step 2: Install nvidia-docker2, following the guide: https://github.com/nvidia/nvidia-docker/wiki/Installation-(version-2.0).

  • Step 3: Modify the provided Dockerfile to accommodate any dependencies. A minimal Dockerfile is shown below.

    FROM igibson/behavior_challenge_2021:latest
    ENV PATH /miniconda/envs/gibson/bin:$PATH
    
    ADD agent.py /agent.py
    ADD simple_agent.py /simple_agent.py
    ADD rl_agent.py /rl_agent.py
    
    ADD submission.sh /submission.sh
    WORKDIR /

    Then build your docker container with docker build . -t my_submission, where my_submission is the docker image name you want to use.

  • Step 4:

    Download the challenge data by completing the user agreement (https://forms.gle/ecyoPtEcCBMrQ3qF9). Place ig_dataset and igibson.key under BehaviorChallenge2021.

  • Step 5:

    Evaluate locally for minival split:

    You can run ./test_minival_locally.sh --docker-name my_submission

    Evaluate locally for dev split:

    You can run ./test_dev_locally.sh --docker-name my_submission

  • Step 6 (dev phase only):

    For the dev phase, we ask participants to evaluate their agents themselves and add the evaluation results to the docker image:

    You can run docker build . -f Dockerfile_add_results -t my_submission_with_results
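Conceptually, the local evaluation scripts above drive your agent through a number of episodes and aggregate per-episode results. A stdlib-only sketch of that loop follows; the environment, the episode/step counts, and the success metric are all stand-ins for illustration, not the benchmark's actual evaluator or metrics:

```python
def evaluate(agent, env, num_episodes=3, max_steps=50):
    """Run the agent for several episodes; return the mean success rate.

    `env` is a toy stand-in with reset()/step(); the real evaluator and
    its metrics are defined by the challenge code, not here.
    """
    successes = []
    for _ in range(num_episodes):
        obs = env.reset()
        agent.reset()
        done, success, steps = False, False, 0
        while not done and steps < max_steps:
            obs, done, success = env.step(agent.act(obs))
            steps += 1
        successes.append(1.0 if success else 0.0)
    return sum(successes) / len(successes)

class ToyEnv:
    # Trivial stand-in environment: "succeeds" after 5 steps.
    def reset(self):
        self.t = 0
        return {"obs": 0}
    def step(self, action):
        self.t += 1
        done = self.t >= 5
        return {"obs": self.t}, done, done

class NoopAgent:
    def reset(self): pass
    def act(self, obs): return 0

print(evaluate(NoopAgent(), ToyEnv()))  # 1.0
```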

Online submission

Follow instructions in the submit tab of the EvalAI challenge page to submit your docker image. Note that you will need a version of EvalAI >= 1.2.3. Here we reproduce part of those instructions for convenience:

# Installing EvalAI Command Line Interface
pip install "evalai>=1.2.3"

# Set EvalAI account token
evalai set_token <your EvalAI participant token>

# Push docker image to EvalAI docker registry
# for minival and test
evalai push my_submission:latest --phase <phase-name>
# for dev
evalai push my_submission_with_results:latest --phase <phase-name>

The valid challenge phases are: behavior-minival-onboard-sensing-1190, behavior-minival-full-observability-1190, behavior-dev-onboard-sensing-1190, behavior-dev-full-observability-1190, behavior-test-onboard-sensing-1190, behavior-test-full-observability-1190.

Our BEHAVIOR Challenge 2021 consists of four phases:

  • Minival Phase (behavior-minival-onboard-sensing-1190, behavior-minival-full-observability-1190): The purpose of this phase is to make sure your policy can be successfully submitted and evaluated. Participants are expected to download our starter code and submit a baseline policy, even a trivial one, to our evaluation server to verify their entire pipeline is correct. The submission will only be evaluated on one activity.
  • Dev Phase (behavior-dev-onboard-sensing-1190, behavior-dev-full-observability-1190): This phase is split into Onboard Sensing and Full Observability tracks. Participants are expected to submit their solutions to each track separately because the tracks have different observation spaces. The results will be evaluated on the dataset dev split and the leaderboard will be updated within 24 hours.
  • Test Phase (behavior-test-onboard-sensing-1190, behavior-test-full-observability-1190): This phase is also split into Onboard Sensing and Full Observability tracks. Participants are expected to submit a maximum of 5 solutions during the last few weeks of the challenge. The solutions will be evaluated on the dataset test split and the results will NOT be made available until the end of the challenge.

Training

Using Docker

Train with minival split (with only one of the activities): ./train_minival_locally.sh --docker-name my_submission.

Note that due to the difficulty of BEHAVIOR activities, the default training with PPO will NOT converge to success. We provide this training pipeline just as a starting point for participants to further build upon.

Not using Docker

  • Step 0: Install Anaconda and create a Python 3.6 environment

    conda create -n igibson python=3.6
    conda activate igibson
    
  • Step 1: Install CUDA and cuDNN. We tested with CUDA 10.0 and 10.1 and cuDNN 7.6.5.

  • Step 2: Install the EGL dependency

    sudo apt-get install libegl1-mesa-dev
    
  • Step 3: Install iGibson from source by following the documentation.

  • Step 4: Download the challenge data by completing the user agreement (https://forms.gle/ecyoPtEcCBMrQ3qF9), and place ig_dataset and igibson.key under igibson/data.

  • Step 5: Start training with stable-baselines3!

    cd iGibson/igibson/examples/demo
    python stable_baselines3_behavior_example.py
    

    This will train with one activity in one scene, defined in behavior_onboard_sensing.yaml.

Feel free to skip Step 5 if you want to use other frameworks for training. This is just example starter code for your reference.
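Whatever framework you pick, training libraries like stable-baselines3 interact with the environment through the standard Gym-style reset/step loop. A stdlib-only sketch of that contract with a toy environment (the real observation space, action space, reward, and horizon come from the challenge config, e.g. behavior_onboard_sensing.yaml; everything named here is a stand-in):

```python
import random

class ToyBehaviorEnv:
    """Toy stand-in for the iGibson/BEHAVIOR environment.

    Only the Gym-style contract matters here: reset() returns an
    observation, step(action) returns (obs, reward, done, info).
    """
    def __init__(self, horizon=10, seed=0):
        self.horizon = horizon
        self.rng = random.Random(seed)
    def reset(self):
        self.t = 0
        return {"proprioception": [0.0]}
    def step(self, action):
        self.t += 1
        reward = self.rng.random()      # stand-in reward signal
        done = self.t >= self.horizon   # episode ends at the horizon
        return {"proprioception": [float(self.t)]}, reward, done, {}

# An RL learner repeatedly collects rollouts with exactly this loop:
env = ToyBehaviorEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = [random.uniform(-1, 1)]    # policy(obs) would go here
    obs, reward, done, info = env.step(action)
    total += reward
print(env.t)  # 10
```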

Acknowledgments

BEHAVIOR Challenge and its underlying simulation engine iGibson use code from a few open source repositories. Without the efforts of these folks (and their willingness to release their implementations under permissible copyleft licenses), BEHAVIOR would not be possible. We thank these authors for their efforts!

