
Active VLN

This repository is the implementation of our ECCV 2020 paper:

Active Visual Information Gathering for Vision-Language Navigation

Hanqing Wang, Wenguan Wang, Tianmin Shu, Wei Liang, Jianbing Shen.


Introduction

This work draws inspiration from human navigation behavior and endows an agent with an active information gathering ability for a more intelligent vision-language navigation policy.

To achieve this, we develop an active exploration module, which learns to 1) decide when the exploration is necessary, 2) identify which part of the surroundings is worth exploring, and 3) gather useful knowledge from the environment to support more robust navigation.

Please refer to our paper for the detailed formulations.
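The three abilities above can be illustrated with a toy sketch. Note this is a hypothetical illustration with made-up names and a trivial scoring heuristic, not the paper's actual model, which uses learned modules:

```python
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class Candidate:
    direction: str     # a navigable direction at the current viewpoint
    confidence: float  # the navigation policy's confidence in this direction

def should_explore(candidates: List[Candidate], threshold: float = 0.5) -> bool:
    """1) Decide when exploration is necessary: only when the policy is unsure."""
    return max(c.confidence for c in candidates) < threshold

def pick_target(candidates: List[Candidate]) -> Candidate:
    """2) Identify which part of the surroundings is worth exploring."""
    return max(candidates, key=lambda c: c.confidence)

def gather_knowledge(candidates: List[Candidate],
                     target: Candidate,
                     bonus: float = 0.3) -> List[Candidate]:
    """3) Gather knowledge: a real agent rolls out a short exploration
    trajectory and updates its state; here we simply boost the explored branch."""
    return [replace(c, confidence=c.confidence + bonus) if c is target else c
            for c in candidates]

# Toy episode step: the policy is uncertain, so it explores before acting.
cands = [Candidate("left", 0.40), Candidate("forward", 0.35)]
if should_explore(cands):
    cands = gather_knowledge(cands, pick_target(cands))
action = max(cands, key=lambda c: c.confidence).direction
```

After exploration, the boosted branch clears the confidence threshold and the agent commits to it; in the full model all three decisions are learned jointly with the navigation policy.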

Results

Here are some results on the R2R dataset as reported in our paper. Metrics: SR = Success Rate, NE = Navigation Error (m), TL = Trajectory Length (m), OR = Oracle success Rate, SPL = Success weighted by Path Length; ↑ means higher is better, ↓ means lower is better.

Single Run Setting

| Set | SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ |
|---|---|---|---|---|---|
| Validation Seen | 0.70 | 3.20 | 19.7 | 0.80 | 0.52 |
| Validation Unseen | 0.58 | 4.36 | 20.6 | 0.70 | 0.40 |
| Test Unseen | 0.60 | 4.33 | 21.6 | 0.71 | 0.41 |

Pre-explore Setting

| Set | SR↑ | NE↓ | TL↓ | OR↑ | SPL↑ |
|---|---|---|---|---|---|
| Test Unseen | 0.70 | 3.30 | 9.85 | 0.77 | 0.68 |

Beam-Search Setting

| Set | SR↑ | TL↓ | SPL↑ |
|---|---|---|---|
| Test Unseen | 0.71 | 176.2 | 0.05 |

Please refer to our paper for comparisons with prior methods.

Environment Installation

  1. Install Jupyter with the following command: `pip install jupyter`

  2. Install the R2R environment for Jupyter. Our code is built on top of R2R-EnvDrop; please install the R2R environment for the Python interpreter used by Jupyter, following its installation instructions.

Quick Start

Inference:

  1. Download the checkpoint of the agent to the directory snap/agent/state_dict/best_val_unseen. The checkpoint is available on Google Drive.
  2. Start a Jupyter service and run the notebook evaluation.ipynb.
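The two steps above can be sketched as shell commands (the Google Drive link is given in the repository, and the exact checkpoint filename may differ):

```shell
# Create the directory the evaluation notebook expects.
mkdir -p snap/agent/state_dict/best_val_unseen
# 1. Download the checkpoint from Google Drive into that directory, then:
# 2. jupyter notebook evaluation.ipynb   # start Jupyter and run all cells
```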

TODO

  • Release the checkpoint.
  • Add training code.

Citation

Please cite this paper in your publications if it helps your research:

@inproceedings{wang2020active,
    title={Active Visual Information Gathering for Vision-Language Navigation},
    author={Wang, Hanqing and Wang, Wenguan and Shu, Tianmin and Liang, Wei and Shen, Jianbing},
    booktitle={ECCV},
    year={2020}
}

License

Active VLN is freely available for non-commercial use and may be redistributed under these conditions. Please see the license for further details. For a commercial license, please contact the authors.

Contact Information

  • hanqingwang[at]bit[dot]edu[dot]cn, Hanqing Wang
  • wenguanwang[dot]ai[at]gmail[dot]com, Wenguan Wang

