feevos / resuneta
mxnet source code for the resuneta semantic segmentation models
License: Other
Although mxnet is very powerful, could you provide a torch version? I don't know how to run the code, it's really hard 😎, or how to train a model using mxnet. I know this is rude, sorry!
Hey,
First, thanks for your code and your paper!
Why did you use the number of classes for the boundary and distance outputs in the last layers of the ResUnet-a D6 simple multitasking implementation (resunet_d6_causal_mtskcolor_ddist.py)?
The first image and the paper suggest the label is a single-channel image. If you use self.NClasses, then for your dataset those layers will have 6 filters each, producing 6-channel outputs just like the segmentation head.
Am I misunderstanding something? I am not used to mxnet.
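For context, one reading of the paper is that the boundary and distance labels are computed per class, which would explain the NClasses output channels. A minimal numpy sketch of a per-class boundary label (the function name and the 4-neighbour convention are assumptions, not from the repo):

```python
import numpy as np

def per_class_boundaries(labels, n_classes):
    """Derive an (n_classes, H, W) boundary stack from an integer label mask.

    A pixel is a boundary pixel for class k if it belongs to class k and
    at least one 4-neighbour does not (assumed convention, not from the repo).
    """
    h, w = labels.shape
    onehot = (labels[None, :, :] == np.arange(n_classes)[:, None, None])
    # Edge padding so image borders do not count as boundaries.
    padded = np.pad(onehot, ((0, 0), (1, 1), (1, 1)), mode="edge")
    boundary = np.zeros_like(onehot)
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        neigh = padded[:, 1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        boundary |= onehot & ~neigh
    return boundary.astype(np.float32)

labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [2, 2, 1]])
b = per_class_boundaries(labels, 3)
print(b.shape)  # (3, 3, 3): one boundary channel per class
```

With per-class labels like these, a boundary (or distance) head with NClasses channels matches the target shape, which would make the implementation consistent.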
Thank you so much for the repo! It's amazing work.
Unfortunately, I can't train the multitasking nets. Could you please share the code to train the net?
Thanks in advance
Hello again @feevos,
I am using the Tanimoto dual loss with complement and it works with the simple ResUnet-a. However, when training the multitasking version with Tanimoto on all tasks, only the first epoch looks good and then the model early-stops at epoch 11. I am using an ISPRS dataset and a batch size of 1. Did you use any learning-rate scheduler or anything else that helped the training converge? Which batch size did you use?
Thanks for all the help.
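For anyone comparing implementations, the Tanimoto loss with complement discussed above can be sketched in numpy as follows (a minimal sketch based on the paper's formulation; the epsilon and reduction over a flat array are assumptions):

```python
import numpy as np

def tanimoto(p, l, eps=1e-6):
    """Tanimoto coefficient between flat predictions p and labels l."""
    num = np.sum(p * l)
    den = np.sum(p * p + l * l) - num
    return (num + eps) / (den + eps)

def tanimoto_dual_loss(p, l):
    """Tanimoto loss 'with complement': average the coefficient on (p, l)
    and on the complements (1 - p, 1 - l), then turn it into a loss."""
    t = 0.5 * (tanimoto(p, l) + tanimoto(1.0 - p, 1.0 - l))
    return 1.0 - t

l = np.array([1.0, 0.0, 1.0, 0.0])
print(round(tanimoto_dual_loss(l, l), 6))  # 0.0: perfect prediction
```

The complement term keeps the gradient informative when the foreground class occupies a small fraction of the image, which is common in boundary channels.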
What postprocessing steps have you used after getting the model output, especially for boundary detection? Could you please share the scripts? It would be a huge help.
Hey @feevos.
I am very interested in your article "Deep learning on edge: extracting field boundaries from satellite images with a convolutional neural network". I want to learn more, but I couldn't find the code for this article. Could you point me to something I can learn from? I am a researcher working on remote sensing image classification.
Hello, I am an academic novice and I don't understand why there are no train.py and test.py files.
Hello, could you give me a hint or a short explanation on how to set IDs please?
How do I replicate the training procedure you followed? I have the Potsdam dataset available. How do I preprocess the dataset to feed into the model?
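A common preprocessing step for large aerial scenes like Potsdam is to cut each ortho-image into fixed-size patches before feeding them to the network. A minimal sketch (the helper name, the 256-pixel tile size, and the non-overlapping stride are guesses, not the repo's actual settings):

```python
import numpy as np

def tile_image(img, tile=256, stride=256):
    """Cut an (H, W, C) image into square tiles of side `tile`.

    Hypothetical helper, not from the repo; the patch size and stride
    actually used for Potsdam may differ.
    """
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(img[y:y + tile, x:x + tile])
    return np.stack(tiles)

img = np.zeros((512, 512, 3), dtype=np.uint8)
print(tile_image(img).shape)  # (4, 256, 256, 3)
```

The same tiling would be applied to the label rasters so that image and mask patches stay aligned.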
def hybrid_forward(self, F, _input_layer):
    x = self.BN1(_input_layer)
    x = F.relu(x)
    x = self.conv1(x)
    x = self.BN2(x)
    x = F.relu(x)
    x = self.conv2(x)
    return x
According to the network diagram, it seems this part (the first BN and ReLU) is computed repeatedly? I mean, it only needs to be computed once at the head of each ResUNet-a block.
Hi Feevos,
I'm using your architecture and I would like to change my image space. I'm testing with the Potsdam dataset normalized to [0, 1], and I don't change the masks.
So I adapted ISPRSnormal.py to my own dataset, with values between 0 and 1, but it didn't perform well. Do I need to change anything else?
I'm doing this to learn how feeding this type of data into the architecture works.
Thanks!
I want to implement your work in TensorFlow but ran into a problem. In your paper, "the initial input is split in channel (feature) space in 4 equal partitions" describes the input to each branch of PSP pooling. I found that this differs from PSPNet. I don't understand why you split the feature map, but I followed the idea. Here is the problem.
Let a feature map have shape F(batch, width, height, channel). I think your idea is to split the feature map like F[b, w, h, c/4], but your code does something different:
b  = F.split(_a, axis=2, num_outputs=2)    # split along height (axis 2 in NCHW)
c1 = F.split(b[0], axis=3, num_outputs=2)  # split along width (axis 3)
c2 = F.split(b[1], axis=3, num_outputs=2)  # split along width (axis 3)
d11 = c1[0]
d12 = c1[1]
d21 = c2[0]
d22 = c2[1]
If I am right that data in mxnet has shape (batch, channel, height, width), why do you split the feature map along height and width? Or am I missing something?
Sorry to trouble you and thanks for your work!
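To make the discrepancy concrete, here is a numpy sketch of what the quoted code does versus what the paper's wording suggests, assuming an NCHW layout (the variable names mirror the quoted snippet; this is an illustration, not the repo's code):

```python
import numpy as np

# Assuming NCHW layout, the quoted split divides the feature map into
# four spatial quadrants rather than four channel groups.
x = np.arange(2 * 4 * 4 * 4).reshape(2, 4, 4, 4)  # (batch, channel, H, W)

top, bottom = np.split(x, 2, axis=2)   # split along height
d11, d12 = np.split(top, 2, axis=3)    # split along width
d21, d22 = np.split(bottom, 2, axis=3)
print(d11.shape)  # (2, 4, 2, 2): full channel depth, quarter spatial extent

# A channel-space split, as the paper's wording suggests, would instead be:
g1, g2, g3, g4 = np.split(x, 4, axis=1)
print(g1.shape)  # (2, 1, 4, 4): quarter of the channels, full spatial extent
```

So the snippet partitions the spatial grid into 2x2 quadrants; whether that or the channel-space split matches the paper's intent is exactly the question raised here.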
You mention in the ReadMe:
"Currently, we do not provide pre-trained weights (will do so in the immediate future)"
Please may you let me know when these pre-trained weights become available?
Thanks
I don't know how to restore the dimensions. Following PSPNet, I skipped the split and directly pooled, then applied a 1x1 convolution, upsampled, and concatenated, and the result is OK. Can you tell me how to implement the paper's solution and what effect it has on the result?
Hello, I would like to ask how you obtained the distance map used as training data. Does it require special pre-processing? Can you share your experience?
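For reference, a distance label can be derived from a binary mask with a distance transform normalized to [0, 1]. A dependency-free numpy sketch using repeated 4-neighbour erosion (the erosion-count approximation and global normalization are assumptions; the paper normalizes a proper distance transform):

```python
import numpy as np

def normalized_distance_map(mask, max_iters=64):
    """Approximate normalized distance-to-boundary map for a binary mask.

    Repeatedly erodes the mask; a pixel's distance is the number of
    erosions it survives, scaled to [0, 1]. Assumed convention, not the
    repo's actual preprocessing.
    """
    current = mask.astype(bool)
    dist = np.zeros(current.shape, dtype=np.float32)
    for _ in range(max_iters):
        if not current.any():
            break
        dist += current
        # 4-neighbour erosion: keep pixels whose neighbours are all inside.
        p = np.pad(current, 1, mode="constant")
        current = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                   & p[1:-1, :-2] & p[1:-1, 2:])
    if dist.max() > 0:
        dist /= dist.max()
    return dist

m = np.zeros((7, 7), dtype=np.uint8)
m[1:6, 1:6] = 1
d = normalized_distance_map(m)
print(d[3, 3])  # centre of the square gets the maximum value, 1.0
```

An exact alternative is `scipy.ndimage.distance_transform_edt`, computed per class and normalized per object.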
Hello, I'm new to academic work. What I want to ask is how to train and test on data when there is no train.py.
I am trying to implement the multitasking version in keras on tf 2.0 and wanted some parameters to compare.
In the jupyter notebook 'InferenceDemo.ipynb', you mentioned the pretrained model "RESUNETA-D7-MODEL.params" .
Could you share this file?
Thanks a lot!