Introduction

2019 IEEE Data Fusion Contest data, baselines, and metrics

For more information visit IEEE's 2019 Data Fusion Contest.

Cloning this repository

This repository contains large model files used by the baseline algorithms. The model files do not need to be downloaded, but are provided as a convenience. Please use Git LFS when cloning to have access to these models. If you did not install and initialize Git LFS before cloning, you can simply run git lfs fetch after initializing Git LFS locally. A typical sequence is illustrated below.
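For illustration, a typical sequence with standard Git LFS commands (substitute the actual repository URL when cloning):

  git lfs install             # one-time Git LFS setup for your user account
  git clone <repository-url>
  git lfs fetch               # download LFS objects if they were skipped during clone
  git lfs checkout            # replace LFS pointer files with the downloaded content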

Contest Tracks

Track 1: Single View Semantic 3D

In Track 1 an unrectified single-view image is provided for each geographic tile. The objective is to predict semantic labels and above-ground heights (meters).

Track 2: Pairwise Semantic Stereo

In Track 2 a pair of epipolar rectified images is given, and the objective is to predict semantic labels and stereo disparities (pixels).

Track 3: Multi-View Semantic Stereo

In Track 3, the goal is to predict semantic labels and a digital surface model given several unrectified multi-view images. A pre-computed geometry model is provided so that participants can focus on the data fusion problem rather than on registration. Example Python code in the baseline solution demonstrates epipolar rectification, triangulation, and coordinate conversion for the satellite images; a generic triangulation sketch follows.
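For orientation only, here is a minimal NumPy sketch of linear (DLT) two-view triangulation. This is not the baseline's code: the 3x4 projection matrices P1 and P2 and the matched pixel coordinates are assumed inputs.

  import numpy as np

  def triangulate(P1, P2, pt1, pt2):
      # Build the DLT system A X = 0 from two 3x4 projection matrices
      # and one matched pixel pair, then solve it by SVD.
      x1, y1 = pt1
      x2, y2 = pt2
      A = np.vstack([x1 * P1[2] - P1[0],
                     y1 * P1[2] - P1[1],
                     x2 * P2[2] - P2[0],
                     y2 * P2[2] - P2[1]])
      _, _, Vt = np.linalg.svd(A)
      X = Vt[-1]
      return X[:3] / X[3]  # homogeneous -> Euclidean (x, y, z)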

Track 4: 3D Point Cloud Classification

In Track 4 the aim is to label points from the given aerial point cloud according to several predetermined semantic classes. For this track only, performance is assessed using standard mIoU.

Performance

For Tracks 1-3, performance is assessed using the pixel-wise mean Intersection over Union (mIoU), where a pixel counts as a true positive only if it has both the correct semantic label and a height or disparity error below a given threshold (1 meter for heights, 3 pixels for disparities). We call this metric mIoU-3; a sketch of the computation follows.
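As a rough illustration of the metric, here is a minimal NumPy sketch based on the description above. It is not the official scoring code, and the function name and argument layout are my own.

  import numpy as np

  def miou3(pred_cls, true_cls, pred_z, true_z, num_classes, threshold=1.0):
      # A pixel is a true positive only if its semantic label matches
      # AND its height/disparity error is below the threshold.
      close = np.abs(pred_z - true_z) < threshold
      ious = []
      for c in range(num_classes):
          pred_c = pred_cls == c
          true_c = true_cls == c
          tp = np.sum(pred_c & true_c & close)
          denom = np.sum(pred_c) + np.sum(true_c) - tp  # TP + FP + FN
          if denom > 0:
              ious.append(tp / denom)
      return float(np.mean(ious))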

Data

JHU/APL has provided a large data set, including ground truth, for training and testing. Instructions for acquiring and using the data are located in the data directory. Data is also available at IEEE DataPort (https://ieee-dataport.org/open-access/data-fusion-contest-2019-dfc2019).

Baseline algorithms

JHU/APL has developed baseline implementations in Python for each challenge track to demonstrate how to manipulate the challenge data and produce valid submission files. These baselines are available in the Track 1-4 folders referenced above.

Submission requirements

Submissions must match the reference file formats and data types and must be readable by scipy.misc.imread. Please check your files before submitting; a quick check is sketched below. There is no requirement to use a particular language for producing submissions.
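For example, a minimal self-check might look like this, assuming an older SciPy release (scipy.misc.imread was removed in SciPy 1.2) and a hypothetical submission file name:

  from scipy.misc import imread

  img = imread('JAX_004_CLS.tif')  # hypothetical submission file name
  print(img.shape, img.dtype)      # compare against the reference files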

Acknowledgements

The authors are grateful to the IEEE GRSS IADF committee chairs – Bertrand Le Saux, Ronny Hänsch, and Naoto Yokoya – for their collaboration in leveraging this work to enable public research. Commercial satellite images were provided courtesy of DigitalGlobe. U. S. Cities LiDAR and vector data were made publicly available by the Homeland Security Infrastructure Program. Geomni LiDAR and oblique imagery will be made available publicly for single use research purposes. This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) contract no. 2017-17032700004. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA or the U.S. Government.

Reference

For more information on the data: Marc Bosch, Kevin Foster, Gordon Christie, Sean Wang, Gregory D. Hager, and Myron Brown, "Semantic Stereo for Incidental Satellite Images," Proc. of Winter Conf. on Applications of Computer Vision, 2019.


Issues

An error in Track 4

interface.py needs "import provider", but in the Docker image the Python version is 3.5, which is lower than the minimum requirement (Python >= 3.6) of "provider". Has anyone else run into this problem?

About Track 4 pointnet2 training data

Hello,
I retrained pointnet2 with the provided code on the 99 point clouds produced by create_train_dataset.py and tested on the 11 validation point clouds produced by the same script, but the result is 6% lower than with the provided trained model. Does the baseline use a different split?

Thank you very much!

AttributeError: module 'tensorflow._api.v2.image' has no attribute 'resize_bilinear'

After setting up Anaconda with conda env create --name dfc2019 --file=dev-gpu.yml
and running

conda activate dfc2019
cd track3
python ./mvs/test-mvs.py

I receive the following error message:

Traceback (most recent call last):
  File "./mvs/test-mvs.py", line 693, in <module>
    predictor.build_seg_model(seg_weights_file)
  File "./mvs/test-mvs.py", line 103, in build_seg_model
    self.seg_model = build_icnet(self.height, self.width, self.bands, self.num_categories + 1,
  File "/mnt/Data-512GB/libraries_ml_geo/dfc2019/dfc2019/track3/mvs/model_icnet.py", line 52, in build_icnet
    y = Lambda(lambda x: tf.image.resize_bilinear(x, size=(int(x.shape[1])//2, int(x.shape[2])//2)), name='data_sub2')(inp)
  File "/home/sebastian/miniconda3/envs/dfc2019/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 922, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/home/sebastian/miniconda3/envs/dfc2019/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py", line 888, in call
    result = self.function(inputs, **kwargs)
  File "/mnt/Data-512GB/libraries_ml_geo/dfc2019/dfc2019/track3/mvs/model_icnet.py", line 52, in <lambda>
    y = Lambda(lambda x: tf.image.resize_bilinear(x, size=(int(x.shape[1])//2, int(x.shape[2])//2)), name='data_sub2')(inp)
AttributeError: module 'tensorflow._api.v2.image' has no attribute 'resize_bilinear'
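One possible workaround (not an official fix): tf.image.resize_bilinear exists only in the TensorFlow 1.x API, so under TensorFlow 2 the Lambda in model_icnet.py could call the compat shim or tf.image.resize instead, for example:

  # TF1 compat shim:
  y = Lambda(lambda x: tf.compat.v1.image.resize_bilinear(
                 x, size=(int(x.shape[1]) // 2, int(x.shape[2]) // 2)),
             name='data_sub2')(inp)
  # or, staying within the TF2 API:
  y = Lambda(lambda x: tf.image.resize(
                 x, size=(int(x.shape[1]) // 2, int(x.shape[2]) // 2),
                 method='bilinear'),
             name='data_sub2')(inp)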

ValueError('all input arrays must have the same shape') in create_train_dataset.py

I am trying to run pointnet2 on my own dataset, which has a '_PC3.txt' file with xyzRGB values and a corresponding '_CLS.txt' file. I get the following error when I run create_train_dataset.py:
[error screenshot]
But I checked the shapes of the CLS and PC3 files, and they are the same. Is this a problem caused by multi-threading? If so, how can I rectify it? I tried in a Linux environment as well, but it gave the same error.
Please help
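One way to narrow this down is a single-threaded diagnostic sketch; the delimiter and file names below are assumptions, so adjust them to match your data:

  import numpy as np

  pc3 = np.loadtxt('tile_PC3.txt', delimiter=',')  # hypothetical file name
  cls = np.loadtxt('tile_CLS.txt', delimiter=',')  # hypothetical file name
  print(pc3.shape, cls.shape)
  assert pc3.shape[0] == cls.shape[0], 'row count mismatch between PC3 and CLS'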

Weight files download

When I clone this repo with git lfs clone, the result is:
This repository is over its data quota. Account responsible for
LFS bandwidth should purchase more data packs to restore access.

Is there any other way to download the weight files? Thank you!

Where can I download the full dataset?

Hello, it has been several days since the release of the training and validation datasets, yet I still have not been able to download the data, even though I registered on the official website to make the links visible, as instructed. For the first few days, the links contained in the BT (torrent) files seemed invalid, as the resources could not be reached; and today I found that the links to the BT files and the net disks are gone from the official page.

Given the situation, could you please show me another way to download the dataset? Apologies for any language issues; my English is not very good.

Unrectified images and corrected RPC for Track 2

Where can I get access to the unrectified images and the corrected RPC or affine camera models?

Page 4 of the publication states: "Metadata such as RPC, epipolar rectifying homographies, and collection dates are retained for each stereo pair." However, I am not able to find RPC parameters in the METADATA.json files or in any of the .tif files.

Track 1-3 Metrics

A memory error is preventing the 3D component of the Track 1-3 metrics from being released. Updated metrics are on the way.

Including RGB in the .las

@pubgeo I am using this version of Pointnet++, and my dataset has xyzRGB values in it. But when I try to write the RGB values to the .las file using laspy, it somehow does not work.
Can you help, please?
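For reference, here is a minimal laspy 2.x sketch that writes RGB; the points array is a hypothetical stand-in for real data, and LAS point format 2 or 3 is required to store color (LAS color channels are 16-bit):

  import laspy
  import numpy as np

  points = np.random.rand(100, 6) * 255  # hypothetical N x 6 array: x, y, z, R, G, B
  header = laspy.LasHeader(point_format=3, version='1.2')  # point format 3 stores RGB
  las = laspy.LasData(header)
  las.x, las.y, las.z = points[:, 0], points[:, 1], points[:, 2]
  # LAS colors are 16-bit, so scale 8-bit RGB values up
  las.red = points[:, 3].astype(np.uint16) * 256
  las.green = points[:, 4].astype(np.uint16) * 256
  las.blue = points[:, 5].astype(np.uint16) * 256
  las.write('output.las')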
