Comments (10)
Related code from my experiment:
HyperFunctions_sp.txt
SSUN_only_spectral.txt
from ssun.
@Snowzm Thank you for your interest in our work. It seems that the stds in your results are relatively large. Have you checked each classification result in those 30 runs? If some runs have higher OA values than our reported mean OA, then I think it is understandable, considering the instability of the network training. Honestly, we also suffered from this instability when training the spectral classifier with the LSTM model. Our initial experimental setting was to conduct 10 repetitions for each algorithm, but we found it hard to obtain stable performance for the LSTM model. Therefore, we increased the repetitions to 30 runs and obtained the results reported in the manuscript. Nevertheless, training the spectral module with the LSTM model alone is still very unstable. This may be a future direction that we could put effort into.
from ssun.
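As a quick sanity check of the point above (whether any individual run beats the reported mean OA), the per-run statistics can be computed like this. The OA values and the reported mean here are made-up placeholders; substitute the numbers logged from your own 30 repetitions:

```python
import numpy as np

# Hypothetical per-run overall accuracy (OA) from 30 repetitions;
# replace with the OAs logged from your own experiments.
rng = np.random.default_rng(0)
oa_runs = rng.normal(loc=0.92, scale=0.03, size=30).clip(0.0, 1.0)

reported_mean_oa = 0.93  # placeholder for the mean OA reported in the paper

mean_oa = oa_runs.mean()
std_oa = oa_runs.std(ddof=1)  # sample standard deviation across runs
n_above = int((oa_runs > reported_mean_oa).sum())

print(f"mean OA = {mean_oa:.4f}, std = {std_oa:.4f}")
print(f"{n_above} of {len(oa_runs)} runs exceed the reported mean OA")
```

If `n_above` is nonzero, a large std is consistent with training instability rather than a systematic gap.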
Thank you sincerely for your detailed response.
I think the instability may be related to the spectral samples for the LSTM; after all, only three time-step vectors are fed into it. Maybe data augmentation would help.
I also noticed that the LSTM has a huge number of parameters. Have you ever tried reducing it?
from ssun.
Your inference sounds reasonable. You can also try to train the LSTM model with a smaller learning rate and more iterations. This may help to yield more robust results.
As for the number of nodes in the network, we just ensure that the spectral feature embedding from the LSTM and the spatial feature embedding from the CNN share the same vector length (i.e., 128 in our implementation). I guess the performance won't be influenced much by the number of nodes. According to our experimental results, the high OA value of SSUN mainly benefits from the CNN subnetwork, while the detailed segmentation map and well-maintained object boundaries benefit from the LSTM subnetwork. Thus, changing the number of parameters in the LSTM alone may not influence the final performance of SSUN much.
from ssun.
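The shared-embedding point above can be illustrated with a toy numpy sketch (not the actual SSUN Keras code): any spectral or spatial encoder works as long as each branch outputs a vector of the shared length before fusion. The 200-band spectrum, the 27x27 patch, and the random linear maps are all hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
embed_dim = 128  # shared embedding length for both branches (per the discussion)

# Stand-ins for the two subnetworks: random linear maps, NOT the SSUN layers.
# The internal width of each branch is free; only the output length is fixed.
def spectral_branch(x, W=rng.normal(size=(200, embed_dim))):
    return np.tanh(x @ W)              # e.g. LSTM final hidden state -> 128-d

def spatial_branch(p, W=rng.normal(size=(27 * 27, embed_dim))):
    return np.tanh(p.reshape(-1) @ W)  # e.g. CNN feature map -> 128-d

spectrum = rng.normal(size=200)    # one pixel's spectral vector (200 bands)
patch = rng.normal(size=(27, 27))  # spatial patch around the pixel

fused = np.concatenate([spectral_branch(spectrum), spatial_branch(patch)])
print(fused.shape)  # (256,) -> fed to the joint classifier
```

Because fusion only sees the two 128-d outputs, shrinking the LSTM's internal parameter count leaves the fused representation's size, and likely the joint classifier's behavior, largely unchanged.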
Great reply! Thank you for your response again. I have another question: why not split 20% of the training data into a validation set for best-model selection or early stopping? Personally, I think it could improve the accuracy on the test sets.
from ssun.
In our experiments, we directly follow the experimental setting in the paper Deep Feature Extraction and Classification of Hyperspectral Images Based on Convolutional Neural Networks, but I agree with you that splitting off a separate validation set may be a better way to search for suitable hyper-parameters. Some works in the literature also adopt this strategy.
from ssun.
Thanks a lot. I mean using a validation set to save the best validated model, or to auto-schedule the learning rate during training, and then using that best model to predict on the test sets. After all, the training and test sets have an identical distribution.
from ssun.
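The best-model-saving and learning-rate-scheduling idea suggested above can be sketched as a plain training-loop skeleton (in Keras this corresponds to the `ModelCheckpoint` and `ReduceLROnPlateau` callbacks). The per-epoch validation accuracies here are made up; in practice they would come from evaluating on the held-out 20% split each epoch:

```python
import math

# Hypothetical per-epoch validation accuracy on the held-out split.
val_history = [0.80, 0.85, 0.84, 0.88, 0.87, 0.87, 0.86, 0.89]

lr = 1e-3
patience = 2  # epochs without improvement before reducing the learning rate
best_acc, best_epoch = -math.inf, -1
since_improve = 0

for epoch, acc in enumerate(val_history):
    if acc > best_acc:
        best_acc, best_epoch = acc, epoch
        since_improve = 0
        # here: save the model weights to disk ("best valid model")
    else:
        since_improve += 1
        if since_improve >= patience:
            lr *= 0.5  # reduce-on-plateau schedule
            since_improve = 0

print(best_epoch, best_acc, lr)  # the saved epoch-7 weights would be used for testing
```

The final prediction on the test set then uses the weights saved at `best_epoch`, not the last epoch's weights.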
Sounds interesting. I think I get what you mean. Thank you for sharing your idea.
from ssun.
import numpy as np
import scipy.io as sio

labels = sio.loadmat("Indian_pines_gt.mat")['indian_pines_gt']
palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0],
                    [0, 255, 255], [255, 0, 255], [176, 48, 96], [46, 139, 87],
                    [160, 32, 240], [255, 127, 80], [127, 255, 212], [218, 112, 214],
                    [160, 82, 45], [127, 255, 0], [216, 191, 216], [238, 0, 0]])
palette = palette / 255.0
X_result = np.zeros((labels.shape[0], 3))
num_class = labels.max()
for i in range(num_class):
    X_result[np.where(labels == i + 1), 0] = palette[i, 0]
    X_result[np.where(labels == i + 1), 1] = palette[i, 1]
    X_result[np.where(labels == i + 1), 2] = palette[i, 2]
X_result = np.reshape(X_result, (145, 145, 3))
I am getting the following error:
ValueError: cannot reshape array of size 435 into shape (145,145,3)
from ssun.
The variable X_result is supposed to be built from the prediction over the whole image (a flat vector with one label per pixel, i.e., 145*145 entries), rather than from the original 2-D ground-truth array.
from ssun.
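Based on the answer above, here is a minimal working sketch of the intended usage. The random label vector is a hypothetical stand-in for the real SSUN prediction over the whole scene; the key point is that the lookup runs over a flat vector of 145*145 labels (passing the 2-D (145, 145) ground-truth array, of which only the first row's 145*3 = 435 values end up in X_result, is what produces the reshape error):

```python
import numpy as np

# Hypothetical whole-image prediction: a flat vector of 145*145 class labels
# in 1..16. If you start from a 2-D array, flatten it first: labels.flatten().
rng = np.random.default_rng(2)
labels = rng.integers(1, 17, size=145 * 145)  # stand-in for the SSUN output

palette = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255], [255, 255, 0],
                    [0, 255, 255], [255, 0, 255], [176, 48, 96], [46, 139, 87],
                    [160, 32, 240], [255, 127, 80], [127, 255, 212],
                    [218, 112, 214], [160, 82, 45], [127, 255, 0],
                    [216, 191, 216], [238, 0, 0]]) / 255.0

# Vectorised lookup: one (3,) colour row per pixel, then reshape to an image.
X_result = palette[labels - 1]          # shape (145*145, 3)
X_result = X_result.reshape(145, 145, 3)
print(X_result.shape)  # (145, 145, 3)
```

The fancy-indexing lookup `palette[labels - 1]` also replaces the per-class loop from the snippet above in one step.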
Related Issues (12)
- Hello, I want to get the code of comparative Ablation Experiment in the paper, is that ok
- Training on other datasets
- Need help
- Which versions of keras and theano were used in this implementation?
- Data preparation understanding
- ValueError: cannot reshape array of size 435 into shape (145,145,3)
- use of palette
- use of s1s2 in HyperspectralSamples function of HyperFunctions module
- X[:,j:j+(nb_features-1)*timestep+1:timestep]
- How did you choose the values of train_num_array for different datasets in HyperspectralSamples function
- The `pool_size` argument must be a tuple of 2 integers. Received: (1, 2, 2)