
sat2graph's Introduction

Sat2Graph

Sat2Graph: Road Graph Extraction through Graph-Tensor Encoding

Abstract

Inferring road graphs from satellite imagery is a challenging computer vision task. Prior solutions fall into two categories: (1) pixel-wise segmentation-based approaches, which predict whether each pixel is on a road, and (2) graph-based approaches, which predict the road graph iteratively. We find that these two approaches have complementary strengths while suffering from their own inherent limitations.

In this paper, we propose a new method, Sat2Graph, which combines the advantages of the two prior categories into a unified framework. The key idea in Sat2Graph is a novel encoding scheme, graph-tensor encoding (GTE), which encodes the road graph into a tensor representation. GTE makes it possible to train a simple, non-recurrent, supervised model to predict a rich set of features that capture the graph structure directly from an image. We evaluate Sat2Graph using two large datasets. We find that Sat2Graph surpasses prior methods on two widely used metrics, TOPO and APLS. Furthermore, whereas prior work only infers planar road graphs, our approach is capable of inferring stacked roads (e.g., overpasses), and does so robustly.
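To make the idea concrete, here is a toy sketch of what a GTE-style tensor could look like. This is an illustrative assumption, not the paper's exact scheme: the per-pixel layout (2 vertexness channels plus 4 channels per edge slot) is inferred from the model code's output-channel formula, 2 + MAX_DEGREE * 4, and the helper encode_toy_graph is hypothetical.

```python
import numpy as np

# Assumed per-pixel layout (inferred from the model's output channel
# count, 2 + MAX_DEGREE * 4): 2 vertexness channels, then for each of
# MAX_DEGREE edge slots, 2 edgeness channels plus a (dx, dy) offset
# pointing toward the neighboring vertex.
MAX_DEGREE = 6
CH = 2 + MAX_DEGREE * 4  # 26 channels per pixel

def encode_toy_graph(size, vertices, edges):
    """Encode a toy planar graph into a GTE-style tensor.

    vertices: list of (x, y) pixel coordinates
    edges: list of (i, j) index pairs into `vertices`
    """
    t = np.zeros((size, size, CH), dtype=np.float32)
    slots = {i: 0 for i in range(len(vertices))}
    for i, (x, y) in enumerate(vertices):
        t[y, x, 0] = 1.0  # vertexness "yes" channel
    for i, j in edges:
        for a, b in ((i, j), (j, i)):  # one edge slot per endpoint
            s = slots[a]; slots[a] += 1
            ax, ay = vertices[a]; bx, by = vertices[b]
            base = 2 + s * 4
            t[ay, ax, base] = 1.0          # edgeness "yes"
            t[ay, ax, base + 2] = bx - ax  # dx toward neighbor
            t[ay, ax, base + 3] = by - ay  # dy toward neighbor
    return t

tensor = encode_toy_graph(64, [(10, 10), (30, 10), (30, 40)],
                          [(0, 1), (1, 2)])
print(tensor.shape)  # (64, 64, 26)
```

Because every pixel carries the same fixed-size vector, a plain convolutional network can predict the whole graph in one shot, which is the point of the encoding.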

Overview

About

All the pretrained models, the dataset, and the docker container are for non-commercial academic use only.

Change Log

2023-04-08 --- Updates

  • For SpaceNet, please find the dataset split, the pre-processed dataset, and the Sat2Graph outputs at this link.
  • The demo website is no longer working. Please check out the Sat2Graph docker container for a demo.

2021-04-14 --- Sat2Graph inference server docker container

  • Containerized the Sat2Graph inference server. Now you can try four Sat2Graph models and three segmentation models (UNet, DeepRoadMapper, and joint orientation learning) in one container.
  • The containerized inference server supports two inference modes: (1) given a lat/lon coordinate and the size of the tile, the server automatically downloads MapBox imagery and runs inference on it; (2) it runs on custom input images as long as the ground sampling distance (e.g., 50 cm/pixel) is provided.
  • Check it out here!
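For mode (1), a sketch of what calling the server could look like from Python. The port (8007) and the requests.post pattern mirror the docker/scripts/infer_custom_input.py usage shown in the issues below; the payload field names here are my guesses and likely differ from the server's actual API, so check the scripts in the container for the real message format.

```python
import json

# Hypothetical request builder for the containerized inference server.
# Field names (lat, lon, tile_size_km, model_id) are illustrative
# assumptions, NOT the server's documented API.
URL = "http://localhost:8007/"

def build_request(lat, lon, tile_size_km=1.0, model_id=2):
    # Mode (1): ask the server to fetch MapBox imagery for a lat/lon tile.
    return {"lat": lat, "lon": lon,
            "tile_size_km": tile_size_km, "model_id": model_id}

msg = build_request(42.3601, -71.0589)
body = json.dumps(msg)
print(body)

# To actually send it (requires the running container):
#   import requests
#   x = requests.post(URL, data=body)
```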

2021-04-12 --- New models

  • Added new global models to our demo. Now you can run Sat2Graph in a larger window (1 km) with the new global models.
  • The new models classify road segments into three categories: freeway roads, traffic roads, and service roads (e.g., parking roads and footpaths).

2021-04-07 --- New demo interface

  • Updated the web portal of our demo.
  • Check out our new experimental Sat2Graph model (Still updating)!

Run Sat2Graph at any place on Earth! (Link).

Try Sat2Graph in iD editor (link). Watch the demo.


Instructions

  • Use the mouse to pan/zoom
  • Press 's' to run Sat2Graph (this takes a few seconds)
  • Press 'd' to toggle background brightness
  • Press 'c' to clear the results
  • Press 'm' to switch models

Supported Models

Model Note
80-City Global Trained on 80 cities around the world. This model is 2x wider than the 20-City US model.
20-City US Trained on 20 US cities. This is the model evaluated in our paper.
20-City US V2 Trained on 20 US cities at 50 cm resolution. This is an experimental model, and it performs poorly in places where high-resolution satellite imagery is not available.
Global-V2 Trained on 80 cities at 50 cm resolution. When applying this model to a new place, the server takes around 17 seconds to download the images and another 30 seconds for inference (1 km by 1 km).

Usage

Download the Dataset and Pre-Trained Model

(The following script no longer works; please download from here.)

./download.sh

This script will download the full 20-city dataset we used in the paper as well as the pre-trained model. It will also download the dataset partition (which tiles are used for training/validation/testing) we used in the paper for the SpaceNet Road dataset.

Generate outputs from the pre-trained model

20-city dataset

cd model
python train.py -model_save tmp -instance_id test -image_size 352 -model_recover ../data/20citiesModel/model -mode test

This command will generate the output graphs for the testing dataset. You can check out the graphs and visualizations in the 'output' folder.
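If you want to inspect those output graphs programmatically, here is a hedged sketch. I am assuming the graphs are pickled Python dicts mapping a node's coordinate to a list of neighboring coordinates; verify against the repo's decoder code before relying on this. A toy graph stands in for a real output file.

```python
import pickle

# Toy stand-in for a Sat2Graph output graph: a dict mapping each
# node's (row, col) coordinate to its list of neighbor coordinates.
# (Assumed format -- check the repo's own graph I/O code.)
toy = {(10, 10): [(30, 10)],
       (30, 10): [(10, 10), (30, 40)],
       (30, 40): [(30, 10)]}
with open("toy_graph.p", "wb") as f:
    pickle.dump(toy, f)

with open("toy_graph.p", "rb") as f:
    graph = pickle.load(f)

n_nodes = len(graph)
# Each undirected edge is stored once per endpoint, so halve the count.
n_edges = sum(len(nb) for nb in graph.values()) // 2
print(n_nodes, n_edges)  # 3 2
```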

SpaceNet dataset

TODO

Training Sat2Graph Model

20-city dataset

To train the model on the 20-city dataset, use the following command.

python train.py -model_save tmp -instance_id test -image_size 352

SpaceNet dataset

Please find the dataset split, the pre-processed dataset, and the Sat2Graph outputs at this link. (You can also email me at [email protected] for them.)

APLS and TOPO metrics

Please see the 'metrics' folder for the details of these two metrics.
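For intuition, here is a heavily simplified, illustrative sketch of the APLS idea: score road-graph similarity by the relative shortest-path-length error over sampled node pairs, charging the full penalty when a pair connected in the ground truth is disconnected in the proposal. This is not the implementation in the 'metrics' folder, which also snaps proposal nodes to ground-truth locations and symmetrizes the score.

```python
import heapq

# Graphs here are dicts: node -> list of (neighbor, edge_length).

def shortest_path_len(graph, src, dst):
    """Dijkstra shortest-path length; None if dst is unreachable."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

def apls_like(gt, prop, pairs):
    """1 - mean relative path-length error over sampled node pairs."""
    penalties = []
    for a, b in pairs:
        lg = shortest_path_len(gt, a, b)
        if lg is None:
            continue  # only pairs connected in the ground truth count
        lp = shortest_path_len(prop, a, b)
        # disconnected in the proposal -> full penalty of 1
        penalties.append(1.0 if lp is None
                         else min(1.0, abs(lg - lp) / lg))
    return 1.0 - sum(penalties) / len(penalties) if penalties else 1.0

gt = {"a": [("b", 1.0)], "b": [("a", 1.0), ("c", 1.0)], "c": [("b", 1.0)]}
# Proposal misses the b-c road segment entirely.
prop = {"a": [("b", 1.0)], "b": [("a", 1.0)], "c": []}
score = apls_like(gt, prop, [("a", "b"), ("a", "c")])
print(score)  # 0.5
```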

sat2graph's People

Contributors

songtaohe

sat2graph's Issues

Inquiry about GPSDistance Formula and Theory

Could anyone please explain the theory behind the formula used in the GPSDistance function? I'm curious about its calculation and why it was chosen for the project.

Thanks for your time and expertise!

Train network with custom image size

Hi,

Thanks for your work. I've used the script to train the model with the 2048 dataset image size and the 352 input image size, but I'm struggling to get it to work with sizes different from those.

Is it possible? The images in my dataset are 512 x 512 and low-res, so I'd like to experiment with input images smaller than 352 x 352.

apls metric

Hi, very impressive work here! When I use "go run main.go ***" to evaluate the example files with the APLS metric, it works fine. But when I convert the model outputs for the test set and the ground-truth graphs to .json format and run the same go command on them, I get errors. I checked that the converted output and ground-truth JSONs are in the same format as the converted example files. Are there any other variables I should modify? Do you know what's wrong?

There is a package error in prepare_data/download.py

Thank you very much for the open-source code. There is an error, ModuleNotFoundError: No module named 'mapdriver', when running prepare_data/download.py. Is the mapdriver package your own file? I could not install it. Looking forward to your reply.

This is really great work, and thank you so much for sharing your source code. I have a question:
if I want to try Sat2Graph on my own dataset, how can I obtain the sample points (_refine_gt_graph_samplepoints.json) and the neighbors (_refine_gt_graph.p) from the ground truth (_gt.png)?
Appreciate your help!

In the current implementation, we take the ground-truth graph (from OpenStreetMap) as input (in graph format) and generate the corresponding segmentation mask (_gt.png), the sample points (_refine_gt_graph_samplepoints.json), and the interpolated ground-truth graphs (_refine_gt_graph.p). For this part, you can check the code in prepare_dataset/download.py.

If your ground truth is in segmentation format, then you may have to first convert it to graph format. Unfortunately, there is no such code in this repo. I can try to add it if you need it.

That same script contains the code that creates the sample points and the refined ground-truth graphs (_refine_gt_graph.p).

Originally posted by @songtaohe in #2 (comment)
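Since the repo has no segmentation-to-graph converter, here is a minimal illustrative sketch of one naive approach: treat every road pixel as a node with edges to its 8-connected road neighbors. For real use you would first thin the mask to a one-pixel skeleton (e.g., with skimage.morphology.skeletonize) and simplify the result; mask_to_graph below is a hypothetical helper, not code from this repo.

```python
import numpy as np

def mask_to_graph(mask, threshold=127):
    """Naive converter: road pixels become nodes, 8-connectivity edges.

    Assumes `mask` is a 2-D uint8 array like _gt.png, with road pixels
    brighter than `threshold`.
    """
    road = mask > threshold
    h, w = road.shape
    graph = {}
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for r in range(h):
        for c in range(w):
            if not road[r, c]:
                continue
            nbs = [(r + dr, c + dc) for dr, dc in offsets
                   if 0 <= r + dr < h and 0 <= c + dc < w
                   and road[r + dr, c + dc]]
            graph[(r, c)] = nbs
    return graph

mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, :] = 255  # a single horizontal road
g = mask_to_graph(mask)
print(len(g))  # 5 nodes
```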

Which version of Deep Layer Aggregation (DLA) was used?

From the code in model.py, I could not figure out which version of the Deep Layer Aggregation architecture was used for the Sat2Graph model. The paper only mentions that you used residual blocks for the aggregation function.

Did you use one of the versions presented in the DLA paper, or did you implement an architecture of your own?

training environment

Hi, really cool work here. Could you specify the training environment for this project? I understand that you are using TensorFlow 1.14.0, but does it require a specific CUDA and cuDNN version?

Coordinates from the inference server are missing a rescale

I ran the sample docker inference and obtained the mask below:

(screenshot of the output mask)

It appears there's a scale transform that I am missing. What post-processing is needed to get the image to the right shape?

model file errors TypeError: Value passed to parameter 'paddings' has DataType float32 not in list of allowed values: int32, int64

I want to run the model files (train.py, localserver.py, ...) but I keep getting this error:

Traceback (most recent call last):
  File "localserver.py", line 22, in <module>
    model = Sat2GraphModel(sess, image_size=352, resnet_step = 8, batchsize = 1, channel = 12, mode = "test")
  File "/content/Sat2Graph/model/model.py", line 66, in __init__
    self.imagegraph_output = self.BuildDeepLayerAggregationNetWithResnet(self.input_sat, input_ch = image_ch, output_ch = 2 + MAX_DEGREE * 4 + (2 if self.joint_with_seg==True else 0), ch=channel)
  File "/content/Sat2Graph/model/model.py", line 342, in BuildDeepLayerAggregationNetWithResnet
    conv1, _, _ = common.create_conv_layer('cnn_l1', net_input, input_ch, ch, kx = 5, ky = 5, stride_x = 1, stride_y = 1, is_training = self.is_training, batchnorm = False)
  File "/content/Sat2Graph/model/tf_common_layer.py", line 56, in create_conv_layer
    input_tensor = tf.pad(input_tensor, [[0, 0], [kx/2, kx/2], [kx/2, kx/2], [0, 0]], mode="CONSTANT")
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/array_ops.py", line 2840, in pad
    result = gen_array_ops.pad(tensor, paddings, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/gen_array_ops.py", line 6399, in pad
    "Pad", input=input, paddings=paddings, name=name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 632, in _apply_op_helper
    param_name=input_name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/op_def_library.py", line 61, in _SatisfiesTypeConstraint
    ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'paddings' has DataType float32 not in list of allowed values: int32, int64
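This traceback is consistent with running the repo's Python 2 code under Python 3: in Python 3, kx / 2 returns a float, and tf.pad rejects float paddings. A minimal illustration of the difference and the likely fix (floor division):

```python
# In Python 3, `/` is true division and returns a float; `//` is floor
# division and returns an int, which is what the Python 2 code meant.
kx = 5
print(kx / 2)    # 2.5  (float -- rejected by tf.pad)
print(kx // 2)   # 2    (int  -- accepted)

half = kx // 2
print(isinstance(half, int))  # True

# The offending line in model/tf_common_layer.py would become:
#   input_tensor = tf.pad(input_tensor,
#       [[0, 0], [kx // 2, kx // 2], [kx // 2, kx // 2], [0, 0]],
#       mode="CONSTANT")
```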

The problem about rtreego

When I run main.go, I get these errors:

apls/main.go:107:49: cannot use rtreego.Point literal.ToRect(tol) (type rtreego.Rect) as type *rtreego.Rect in return argument
apls/main.go:356:13: cannot use &gNode (type *gpsnode) as type rtreego.Spatial in argument to rt.Insert:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:373:28: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:377:25: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:378:36: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
apls/main.go:381:32: impossible type assertion:
*gpsnode does not implement rtreego.Spatial (wrong type for Bounds method)
have Bounds() *rtreego.Rect
want Bounds() rtreego.Rect
Have you met the same problem? Thank you.

system freeze when using custom images

Hi, really interesting project.

I'm having some problems running the CPU docker version.
It works fine when using
python infer_custom_input.py -input sample.png -gsd 0.5 -model_id 2 -output out.json
as written in the instructions.

But when I give it a different file, the whole system crashes:
python infer_custom_input.py -input test.png -gsd 0.5 -model_id 2 -output out.json
(I cropped the image to match the size of sample.png, 704*704.)

and this is what I get on docker side

(704, 704, 3)
INFO:root:POST request,
Path: /
Headers:
Host: localhost:8007
Connection: keep-alive
Accept-Encoding: gzip, deflate
Accept: */*
User-Agent: python-requests/2.25.1
Content-Length: 311




Progress (50.0%) >>>>>>>>>>>>>>>>>>>>--------------------('GPU time (pass 1):', 3.8675999641418457)
('Decode time (pass 1):', 0.06645011901855469)
Progress (100.0%) >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>('GPU time (pass 2):', 3.893293857574463)
('Decode time (pass 2):', 0.06614208221435547)
begin

It stops at "begin" forever!

Another thing: is there any specific input format for the custom file? It seems to only accept 24-bit-depth PNGs?

Any suggestions would be helpful, thanks.

Connection error

I cloned the repo inside the Docker container, and to run it on a custom image from inside the container I ran:

root@27442c0c7c4c:/usr/src/app/Sat2Graph/docker/scripts# python infer_custom_input.py -input /usr/src/app/BIAL_train/1.png -gsd 0.5 -model_id 3 -output /usr/src/app/BIAL_train/out1.json

But I keep getting this error:

<type 'str'>
Traceback (most recent call last):
  File "infer_custom_input.py", line 56, in <module>
    x = requests.post(url, data = json.dumps(msg))
  File "/root/.local/lib/python2.7/site-packages/requests/api.py", line 119, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/root/.local/lib/python2.7/site-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/root/.local/lib/python2.7/site-packages/requests/sessions.py", line 542, in request
    resp = self.send(prep, **send_kwargs)
  File "/root/.local/lib/python2.7/site-packages/requests/sessions.py", line 655, in send
    r = adapter.send(request, **kwargs)
  File "/root/.local/lib/python2.7/site-packages/requests/adapters.py", line 498, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine('No status line received - the server has closed the connection',))

Please advise.

Experiment on other data sets

This is really great work, and thank you so much for sharing your source code. I have a question:
if I want to try Sat2Graph on my own dataset, how can I obtain the sample points (_refine_gt_graph_samplepoints.json) and the neighbors (_refine_gt_graph.p) from the ground truth (_gt.png)?
Appreciate your help!

Want to know details about the network

This is my first time working with GPS coordinates. The propagation distance in the TOPO function is set to 300 meters. If the size of my input image is 512*512 pixels and r = 0.00300, how many pixels does the propagation distance correspond to?

Looking forward to your reply
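For what it's worth, a back-of-the-envelope conversion, assuming r is expressed in degrees of latitude:

```python
# One degree of latitude is roughly 111,320 m anywhere on Earth
# (a degree of longitude shrinks with cos(latitude)).
METERS_PER_DEG_LAT = 111320.0

r_deg = 0.00300
meters = r_deg * METERS_PER_DEG_LAT
print(round(meters))  # 334 -- close to the 300 m TOPO default

# How many pixels that is depends on the ground sampling distance,
# not on the image being 512x512. E.g., at 50 cm/pixel:
gsd = 0.5  # meters per pixel
pixels = meters / gsd
print(round(pixels))  # 668
```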

Source code for dataloader_spacenet.py

Hi,

This is great work, and I would like to train your model on SpaceNet. Can you please share the code for dataloader_spacenet.py?

It would be perfect if the model trained on SpaceNet could also be released. I am looking forward to hearing back from you.

Thank you very much!

Best
Yang

Interested in pseudo code of decoder

Great work you have done here. I wanted to see how you handled the distance computation for the two vertices connected to the same edge (whether you computed the distance for one endpoint or for both), but the paper didn't go into detail on the implementation of the decoder. The code looks rather complicated; is there some decoder pseudocode to look at?
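In the meantime, here is hedged pseudocode (as runnable Python) of the decoding procedure as the paper describes it at a high level: threshold vertexness to get vertices, then for each predicted edge follow its (dx, dy) offset and snap the endpoint to the nearest vertex. The channel layout and the decode function are assumptions, not the repo's actual decoder; the real code also does non-max suppression on keypoints and resolves edge conflicts, both omitted here.

```python
import numpy as np

# Assumed per-pixel layout: channel 0 holds vertexness, and each edge
# slot s holds [edgeness, _, dx, dy] starting at channel 2 + s * 4.
MAX_DEGREE = 6

def decode(tensor, v_thr=0.5, e_thr=0.5, snap_dist=5.0):
    ys, xs = np.where(tensor[:, :, 0] > v_thr)
    vertices = list(zip(xs.tolist(), ys.tolist()))
    edges = set()
    for x, y in vertices:
        for s in range(MAX_DEGREE):
            base = 2 + s * 4
            if tensor[y, x, base] <= e_thr:
                continue
            # predicted endpoint of this edge
            tx = x + tensor[y, x, base + 2]
            ty = y + tensor[y, x, base + 3]
            # snap it to the nearest decoded vertex
            best = min(vertices,
                       key=lambda v: (v[0] - tx) ** 2 + (v[1] - ty) ** 2)
            d2 = (best[0] - tx) ** 2 + (best[1] - ty) ** 2
            if d2 <= snap_dist ** 2 and best != (x, y):
                edges.add(tuple(sorted([(x, y), best])))
    return vertices, sorted(edges)

# Toy tensor: two vertices at (5, 5) and (15, 5), one edge between them.
t = np.zeros((20, 20, 2 + MAX_DEGREE * 4), dtype=np.float32)
t[5, 5, 0] = 1.0; t[5, 15, 0] = 1.0   # vertexness
t[5, 5, 2] = 1.0                       # edge slot 0 at (5, 5)
t[5, 5, 4] = 10.0                      # dx toward (15, 5)
vertices, edges = decode(t)
print(vertices, edges)  # [(5, 5), (15, 5)] [((5, 5), (15, 5))]
```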
