
topo-boundary's Issues

Can't find CUDA in docker

Hi,
Per our previous conversation over email, CUDA 11.0 should be installed in the docker image, but I can't find it (other packages like Torch 1.7.0 are fine).

Here are my steps:

  1. I followed the directions to install docker and ran build_image.sh to build the image.
  2. Started the docker container with sudo docker run -it zhxu_topoboundary.
  3. Typed nvidia-smi, but the command was not found.

Are there any extra steps missing from my installation?
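
For reference, here is what I can check from Python inside the container (my understanding is that nvidia-smi ships with the host driver utilities, so it can be absent even when CUDA itself works):

    import torch

    # Diagnostic sketch: nvidia-smi comes from the driver utilities and may be
    # missing from the image even when CUDA support works, so checking from
    # Python is more conclusive.
    print(torch.__version__)          # e.g. 1.7.0
    print(torch.version.cuda)         # CUDA version PyTorch was built against, e.g. 11.0
    print(torch.cuda.is_available())  # False if the container has no GPU access

If the last line prints False, I suspect the container simply needs to be started with GPU flags (e.g. docker run --gpus all ...).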

Thanks!

Testing Enhanced-iCurb with initial vertices generated by the trained model instead of GT initial vertices

Thanks for your great work. I have trained a model that generates good graphs from GT initial vertices, but when I tried to test it with generated initial vertices, something went wrong. In the training data, GT initial vertices are divided into two categories, init_vertex and end_vertex, and the agent begins at init_vertex. When I used all the generated initial vertices to test Enhanced-iCurb, I got very bad results. At the same time, in the init_vertices part, I couldn't figure out how to classify the generated initial vertices into the two categories.
Is something wrong with my operation, or is this by design?
Looking forward to your reply.

Inference on new images

Hi Zhenhua and other authors,

I was trying to run inference on completely new 1000 by 1000 aerial images using iCurb. It turned out that the model required label information from "./dataset/labels/dense_seq/xxx.json". However, most label data are provided via a Google Drive download link.

I wonder if there is a way to compute all the required label information for running inference on new images. Any help would be much appreciated.

RuntimeError: cuda runtime error (710) : device-side assert triggered when training the OrientationRefine segmentation network

Hi,

Under /Topo-boundary/segmentation_based_baselines/OrientationRefine, inside the built docker image, I ran ./run_train_seg.bash and got the following errors:

=======================
Start segmentation of OrientationRefine...
Device:  cuda:0
Batch size:  1
Mode:  train
=======================
Finish loading the training data set lists 10000!
Finish loading the valid data set lists 1770!
/opt/conda/lib/python3.8/site-packages/torch/nn/_reduction.py:44: UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))
Epoch 1/10:   0%|                                                               | 0/10000 [00:00<?, ?img/s]/opt/conda/conda-bld/pytorch_1603729009598/work/aten/src/THCUNN/SpatialClassNLLCriterion.cu:106: cunn_SpatialClassNLLCriterion_updateOutput_kernel: block: [1,0,0], thread: [198,0,0] Assertion `t >= 0 && t < n_classes` failed.
/opt/conda/conda-bld/pytorch_1603729009598/work/aten/src/THCUNN/SpatialClassNLLCriterion.cu:106: cunn_SpatialClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [222,0,0] Assertion `t >= 0 && t < n_classes` failed.
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1603729009598/work/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu line=134 error=710 : device-side assert triggered
Epoch 1/10:   0%|                                                               | 0/10000 [00:00<?, ?img/s]
Traceback (most recent call last):
  File "train_seg.py", line 214, in <module>
    train(args,i,net,train_dataloader,train_len,optimizor,criterion,writer,valid_dataloader,valid_len)
  File "train_seg.py", line 93, in train
    loss_ori += criterion['orien_loss'](F.interpolate(pre_oris[0], scale_factor=(4,4), mode='bilinear', align_corners=True),ori_mask)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/huijie/projects/Topo-boundary/segmentation_based_baselines/OrientationRefine/utils/loss.py", line 16, in forward
    loss = self.nll_loss(log_p, targets)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 213, in forward
    return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 2266, in nll_loss
    ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: cuda runtime error (710) : device-side assert triggered at /opt/conda/conda-bld/pytorch_1603729009598/work/aten/src/THCUNN/generic/SpatialClassNLLCriterion.cu:134

There are 65 classes including background, but the default number of classes is 64; the error happens when the class index is 64. Do you have any suggestions about this problem?
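
For reference, here is a minimal CPU reproduction of what the assert is complaining about (the shapes are illustrative, not the real ones):

    import torch
    import torch.nn.functional as F

    # Minimal CPU reproduction (shapes are illustrative): with 64 output
    # classes, any target pixel labelled 64 is out of range, since valid
    # class ids are 0..63. On CPU this raises a readable Python error
    # instead of the opaque device-side assert.
    n_classes = 64
    log_p = F.log_softmax(torch.randn(1, n_classes, 4, 4), dim=1)  # (N, C, H, W)
    target = torch.full((1, 4, 4), n_classes, dtype=torch.long)    # class 64 -> invalid
    F.nll_loss(log_p, target)  # fails: target out of bounds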

Thank you! 

README clarification on RNGDet++

I saw the following in the README: "You could try our latest proposed work RNGDet++, which is much more powerful and flexible than iCurb." I thought RNGDet++ is a road network extraction method, while iCurb is more of a road boundary detection method. I am curious how the two are comparable. Any help is much appreciated.

https://raw.githubusercontent.com/TonyXuQAQ/Topo-boundary/master/README.md#:~:text=%23%23%20New%20models%0AYou%20could%20try%20our%20latest%20proposed%20work%20%5BRNGDet%2B%2B%5D(https%3A//github.com/TonyXuQAQ/RNGDetPlusPlus)%2C%20which%20is%20much%20more%20powerful%20and%20flexible%20than%20iCurb.

Cannot download the dataset

When I downloaded "cropped_tiff.zip", Google Drive always displayed:

Sorry, you can't view or download this file at this time.
Too many users have viewed or downloaded this file recently. Please try accessing the file again later...

Could you please give me a private share so that I can download the data?

NIR imagery

Hi,
Thanks so much for this awesome work. I was impressed by its easy-to-use structure and performance.

I am trying to use my own dataset to predict curbs with Enhanced-iCurb, but it seems the NIR image channel is quite important for predicting initial vertices and curbs. I tried with and without the NIR channel using the data the team provided and confirmed there is quite a difference in performance.

I am trying the same thing on satellite imagery of a different city, but I couldn't find a good source of NIR imagery. May I ask how or where you obtained the NIR imagery?

Thank you so much,

Reproducing the orientation map

Hi Xu,

I ran into a few issues while trying to reproduce the code that generates orientation maps from binary maps. I got some preliminary results by leveraging the OrientationRefine paper's implementation, but they seem to be very different from the provided orientation maps. After carefully looking through the "Supplementary document for Topo-Boundary," I suspect you implemented the conversion differently.

(1) Does the actual value in the orientation map matter? The key seems to be that pixels on boundaries with different directions should have different values, and pixels on boundaries with the same direction should have the same value. The model should work either way, right?

(2) It was mentioned that the pixel value p is the angle between the road boundary and the v axis. Is that v axis pointing to 3/2 pi (i.e., 270 degrees)?

(3) I am not sure how abs(p) = 32*theta/pi relates to the 64-class classification. Should we calculate abs(p) first and then group the values into 64 classes?
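
For reference, here is how I currently read the formula (just my guess; the names are mine, not the repo's):

    import math

    # Hypothetical reading of the formula: the angle theta, measured from the
    # v axis and normalized into [0, 2*pi), maps to p = 32*theta/pi, which
    # spans [0, 64); truncating p then gives one of the 64 classes directly.
    def angle_to_class(theta: float) -> int:
        theta = theta % (2 * math.pi)     # normalize into [0, 2*pi)
        return int(32 * theta / math.pi)  # truncates to 0..63

    assert angle_to_class(0.0) == 0
    assert angle_to_class(math.pi / 2) == 16
    assert angle_to_class(2 * math.pi - 1e-6) == 63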

I know you mentioned that someone else generated the preprocessing data but that you will release the data preprocessing code in the near future. I wonder if you would be willing to share a snippet of the binary_map-to-orientation_map implementation, even if it has not been cleaned and refactored.

Any help is much appreciated. My email is [email protected].

Dataset, Training questions

Hi, Zhenhua.
Thanks for your great work; iCurb is very interesting. I'm trying to train Enhanced iCurb on a new dataset, have run into some problems with training, and also have some questions.

1. In your supplementary document you explained all the label types needed for training iCurb, but when I read the code I found that the network uses only some of the generated labels. To be more precise, it actually needs image, binary_map, orientation_map and initial_vertices to train iCurb, and to test iCurb one additional piece of data (sampled_seq) needs to be loaded. But in the paper you mentioned 9 types of labels as training data. What is wrong here, and how do the other labels (e.g. direction_map, annotation_seq, inverse_distance, ...) help the training process?

  ./utils/dataset: in function __getitem__():
           return seq, seq_lens, tiff, mask, ori, image_name, init_points, end_points
  ./utils/eval_metric: in function APLS(name_in):
           with open(os.path.join(args.sampled_seq_dir,image_name),'rb') as jf:

2. In the NYC dataset, the annotation_seq data has dense points at road intersections and low-density points on straight lines. Is this a necessary feature for generating vertices in annotation_seq? How are these vertices generated?

[Screenshot illustrating the annotation_seq vertex density omitted.]

3. How many images are needed to train Enhanced iCurb?

4. In our dataset we don't have the direction of each line of a road, so we assign a direction to each line based on its vertex sequence; in fact, the two lines on either side of a road may or may not have the same direction, so there is a slight change in the dataset configuration. How important are the direction map and orientation map to iCurb?

Thanks!

About the initial vertex Q in iCurb

Hello, as you mentioned in the iCurb paper:
During training, the initial vertex candidates Q are obtained by adding Gaussian noises to the ground-truth initial vertices, while during testing, Q is generated from the segmentation results S and H by the proposed algorithm.
But in your code, whether during training or testing, the initial vertex Q seems to be obtained directly from the data loader and has nothing to do with the S and H branches. Did I understand it wrong? Below is the relevant code.
iCurb/main_val.py

    # =================working on an image=====================
    for i, data in enumerate(network.dataloader_valid):
        _, _, tiff, mask, name, init_points, end_points = data
        name = name[0][-14:-5]
        tiff = tiff.to(args.device)
        mask, init_points, end_points = mask[0], init_points[0], end_points[0]
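
For reference, my understanding of the training-time behavior described in the paper would be something like this (a sketch; the sigma value is my assumption, not taken from the repo):

    import torch

    # Sketch of the paper's training-time description: perturb the GT
    # initial vertices with Gaussian noise to form the candidate set Q.
    # sigma is a guess, not a value from the repo.
    def noisy_candidates(gt_init_vertices: torch.Tensor, sigma: float = 2.0) -> torch.Tensor:
        noise = torch.randn_like(gt_init_vertices.float()) * sigma
        return gt_init_vertices.float() + noise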

Looking forward to your reply, thanks a lot.

When loading './checkpoints/seg_8.pth', I got this error

Start segmentation of OrientationRefine...
Device: cuda:0
Batch size: 2
Mode: infer_train
Checkpoint ./checkpoints/seg_8.pth

Finish loading the training data set lists 10000!
Finish loading the test data set lists 10236!
Traceback (most recent call last):
  File "train_seg.py", line 199, in <module>
    net.load_state_dict(torch.load(args.load_checkpoint, map_location='cpu'))
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for StackHourglassNetMTL:
size mismatch for conv1.weight: copying a param with shape torch.Size([64, 4, 7, 7]) from checkpoint, the shape in current model is torch.Size([64, 5, 7, 7]).
size mismatch for score_2.0.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([65, 128, 1, 1]).
size mismatch for score_2.0.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([65]).
size mismatch for score_2.1.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([65, 128, 1, 1]).
size mismatch for score_2.1.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([65]).
size mismatch for _score_2.0.weight: copying a param with shape torch.Size([128, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([128, 65, 1, 1]).
size mismatch for angle_decoder1_score.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([65, 128, 1, 1]).
size mismatch for angle_decoder1_score.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([65]).
size mismatch for angle_finalconv3.weight: copying a param with shape torch.Size([64, 32, 2, 2]) from checkpoint, the shape in current model is torch.Size([65, 32, 2, 2]).
size mismatch for angle_finalconv3.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([65]).

Maybe the 'task2_classes' parameter should be 64 on line 159 of 'topoBoundary/segmentation_based_baselines/OrientationRefine/model/stack_module.py'.
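
For reference, the class count can be read straight from the checkpoint instead of guessed (shapes taken from the error above):

    import torch

    # Sketch: inspect the checkpoint to see how many classes it was trained
    # with. From the error above, score_2.0.weight in seg_8.pth has shape
    # (64, 128, 1, 1) and conv1.weight has shape (64, 4, 7, 7).
    state = torch.load('./checkpoints/seg_8.pth', map_location='cpu')
    print(state['score_2.0.weight'].shape[0])  # number of orientation classes -> 64
    print(state['conv1.weight'].shape[1])      # number of input channels -> 4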

Question about sampled_seq

Hi Zhenhua,

Thank you very much for your work.
I ran the code with a dataset that has JSON ground truth and would like to evaluate APLS. The Topo-boundary code requires pickle ground truth for the APLS metric. Could you please share the code to generate the pickle file from the JSON?
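
In case it helps, if the JSON already has the structure the APLS code expects and only the serialization differs, the conversion might be as simple as the following (file names are placeholders):

    import json
    import pickle

    # Hypothetical converter, assuming the JSON ground truth already matches
    # the structure the APLS code expects and only the file format differs.
    with open('label.json') as jf:
        data = json.load(jf)
    with open('label.pickle', 'wb') as pf:
        pickle.dump(data, pf)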

Thanks!

Best,
Huijie

Question about Enhanced-iCurb loss

Hi,

I am training the Enhanced-iCurb baseline. After decreasing the batch size and learning rate, the model doesn't converge.

In the first model, I changed batch_size to 64 and lr_rate to 0.00005 (default setting: batch_size 128 and lr_rate 0.0001). Both coor_loss and stop_loss fail to converge.

In the second model, I changed batch_size to 64 and lr_rate to 0.00001 (default setting: batch_size 128 and lr_rate 0.0001). stop_loss still doesn't converge, even though the lr is very small.

[Loss curve screenshots omitted.]

Should I continue to decrease the learning rate to 0.000001?

Thank you!

Retraining Enhanced-iCurb is slow

Dear Xu,

I apologize for bugging you again, but I would greatly appreciate your thoughts. I tried to retrain the Enhanced-iCurb model with 3-channel (RGB only) images, to see the effect of removing the NIR channel.

I found the retraining process very slow, mainly because of the validation over 150 images after training on each image. It only got through about 500 training images in 72 hours (on a GTX 1080 Ti). The following are my two questions:
(1) You might have tried your model with 3-channel images. Was it as good as the 4-channel one? Never mind if you have not; I am happy to share my result once I have one.
(2) Is there a faster way to retrain the model? For example, is validating the 150 images after training on each single image necessary?

Thank you so much in advance.

Using own data

Hello,

I am a bit confused about how to use my own aerial images.
If I save the images in my manually created cropped_tiff folder and run e.g. init_vertex, I get this error message:

Traceback (most recent call last):
  File "utils/init_vertex_extraction.py", line 38, in <module>
    image_list = os.listdir(image_dir)
FileNotFoundError: [Errno 2] No such file or directory: './records/endpoint/test'
cp: cannot stat './records/endpoint/vertices/*': No such file or directory

I built the docker image, and I am inside the container after running ./build_container.bash inside the init_vertex subdirectory.
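
Reading the traceback, os.listdir is called on './records/endpoint/test', so init_vertex seems to expect output folders (or results of an earlier step) that don't exist yet. Here is a quick check of the folders it looks for (paths taken from the error message):

    import os

    # Quick check of the folders the script expects (paths copied from the
    # traceback above); if they are missing, an earlier pipeline step that
    # writes endpoint predictions probably has not been run yet.
    for d in ('./records/endpoint/test', './records/endpoint/vertices'):
        print(d, '->', 'exists' if os.path.isdir(d) else 'MISSING')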

label download

The label download URL failed. Could you share a new URL?
