
ieee_tnnls_egu-net's Introduction

Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing

Danfeng Hong, Lianru Gao, Jing Yao, Naoto Yokoya, Jocelyn Chanussot, Uta Heiden, Bing Zhang


The code in this toolbox implements the paper "Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing". More details are given below.


Citation

Please kindly cite the paper if this code is useful for your research.

Danfeng Hong, Lianru Gao, Jing Yao, Naoto Yokoya, Jocelyn Chanussot, Uta Heiden, Bing Zhang. Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing, IEEE Transactions on Neural Networks and Learning Systems, 2021, DOI: 10.1109/TNNLS.2021.3082289.

 @article{hong2021endmember,
  title     = {Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing},
  author    = {D. Hong and L. Gao and J. Yao and N. Yokoya and J. Chanussot and U. Heiden and B. Zhang},
  journal   = {IEEE Trans. Neural Netw. Learn. Syst.}, 
  year      = {2021},
  note      = {DOI: 10.1109/TNNLS.2021.3082289},
  publisher = {IEEE}
 }

System-specific notes

The data were generated with MATLAB R2016a or later, and the network code was tested with TensorFlow 1.14 and Python 3.7 on Windows 10 machines.

How to use it?

This toolbox provides two self-supervised unmixing network architectures: a pixel-wise EGU-Net built from fully connected layers (EGU-Net-pw) and a spatial-spectral EGU-Net built from convolutional layers (EGU-Net-ss). For more details, please refer to the paper.
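For orientation only, here is a minimal sketch of what a pixel-wise, fully connected abundance estimator in the EGU-Net-pw spirit can look like under TensorFlow 1.x. The layer widths, variable names, and the softmax output used to impose the abundance non-negativity and sum-to-one constraints are illustrative assumptions, not the exact architecture in EGU-Net-pw.py; please refer to the released code and the paper for the actual design.

```python
import tensorflow as tf  # written against TensorFlow 1.x (e.g., 1.14)

def pixelwise_abundance_encoder(x, num_endmembers):
    """Map pixel spectra (batch x bands) to abundances (batch x num_endmembers)."""
    h = tf.layers.dense(x, 128, activation=tf.nn.relu, name='fc1')
    h = tf.layers.dense(h, 64, activation=tf.nn.relu, name='fc2')
    logits = tf.layers.dense(h, num_endmembers, name='fc_out')
    # Softmax keeps abundances non-negative and makes them sum to one per pixel.
    return tf.nn.softmax(logits)

# Example: 224 spectral bands, 5 endmembers (placeholder sizes only).
x = tf.placeholder(tf.float32, [None, 224], name='spectra')
abundances = pixelwise_abundance_encoder(x, num_endmembers=5)
```

In the full EGU-Net, this unmixing stream is additionally guided by a second stream trained on the pseudo-pure pixels (the spectral bundles) with shared weights; the sketch above only illustrates the abundance-estimation part.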

The data used as network input (named TNNLS_Data), including the original hyperspectral image, the extracted endmembers (spectral bundles), and the corresponding pseudo abundances, can be downloaded from

Google Drive: https://drive.google.com/file/d/167bWkNbqYd4ZT7isDcYX0HCYesak_d4t/view?usp=sharing

Baiduyun: https://pan.baidu.com/s/17ucmeLihLWv9t8-gtW74DQ (access code: ulvh).

PS: the endmembers (spectral bundles) can be extracted from the original hyperspectral image using the provided Pseudo_endmembers_generation.m function.
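According to that script and the issue threads below, the extraction works patch-wise with VCA, and the pseudo abundances are then computed with SUnSAL. Purely as a conceptual illustration of the abundance step, the Python sketch below solves a plain non-negative least-squares problem per candidate pixel and normalizes each abundance vector to sum to one (mirroring the TrLabel normalization); it is an assumed analogue, not the repository's code, and it does not reproduce SUnSAL's sparse regularization.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

def pseudo_abundances(M, pixels):
    """Estimate sum-to-one abundances for each spectrum.

    M      : (bands, p) endmember matrix
    pixels : (bands, n) spectra, e.g. the extracted pseudo-pure pixels
    returns: (p, n) abundances whose columns sum to one
    """
    abund = np.stack([nnls(M, pixels[:, i])[0] for i in range(pixels.shape[1])], axis=1)
    return abund / np.maximum(abund.sum(axis=0, keepdims=True), 1e-12)
```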

More specifically, to use the proposed networks, first download the data from the links above and copy them into "TNNLS_Data". If you want to run the code on your own data, first extract the endmembers with the provided Pseudo_endmembers_generation.m function and generate the corresponding pseudo abundances; then copy them into the same folder and use them as input to the networks.
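As a quick sanity check after downloading, the .mat files can be inspected from Python with scipy.io before wiring them into the networks. The file and variable names below (Mixed_TrSet, Pure_TrSet, TrLabel) are taken from the issue threads further down and are assumptions about how TNNLS_Data is organized; adjust the paths and keys to whatever the downloaded archive actually contains.

```python
import scipy.io as sio

# Assumed file layout; adapt to the actual contents of TNNLS_Data.
mixed = sio.loadmat('TNNLS_Data/Mixed_TrSet.mat')  # mixed pixels for the unmixing stream
pure  = sio.loadmat('TNNLS_Data/Pure_TrSet.mat')   # pseudo-pure pixels (spectral bundles)
label = sio.loadmat('TNNLS_Data/TrLabel.mat')      # pseudo abundances of the pure pixels

# Print the array shapes to verify the bands / pixels / endmember dimensions.
for name, mat in [('Mixed_TrSet', mixed), ('Pure_TrSet', pure), ('TrLabel', label)]:
    shapes = {k: v.shape for k, v in mat.items() if hasattr(v, 'shape')}
    print(name, shapes)
```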

If you encounter bugs while using this code, please do not hesitate to contact us.

Licensing

Copyright (C) 2021 Danfeng Hong

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program.

Contact Information:

Danfeng Hong: [email protected]
Danfeng Hong is with Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, 38000 Grenoble, France, and with the Remote Sensing Technology Institute (IMF), German Aerospace Center (DLR), 82234 Wessling, Germany.

ieee_tnnls_egu-net's People

Contributors

danfenghong


ieee_tnnls_egu-net's Issues

Results on the simulated dataset

Hello, these are the results I got by running EGU-Net-pw.py on the simulated dataset (the provided Pure and Mixed sets). Why does the loss decrease while the accuracy also drops and stays very low?

How to test on my data?

Hi,
Thanks for your good work!

I would like to know whether I can test the model on my own data.

Looking forward to your reply!

On constructing the approximately pure pixels and their abundances

Hello, I have a few questions about Pseudo_endmembers_generation.m:
[Using the 200×200 Mixed_TrSet from the simulated dataset as input]

1) What kind of noise is added to the original data?

2) According to the paper, the endmembers in each image patch should be extracted with HySime, but here the number is fixed at 5.

3) Before computing the abundances with sunsal, the 8000 approximately pure pixels are not clustered to construct the endmember matrix M (224×5).

4) When calling sunsal, the approximately pure pixels EM (224×8000) should be the input, i.e., sunsal(M, EM, ...) to obtain the corresponding TrLabel (5×8000).

What are the Pure and Mixed data used for?

In the demo data you provided, the Pure set contains 8000 samples. Do these all come from the endmembers extracted by VCA? And what is the role of the Mixed set?

Pseudo True Abundance Map Generation

First things first, thanks for sharing the code; it is a wonderful work indeed.
I have one query regarding the true abundance map generation (TrLabel):

Abund = sunsal(EM, X', 'lambda', 0, 'ADDONE', 'no', 'POSITIVITY', 'yes', ...
'AL_iters', 200, 'TOL', 1e-4, 'verbose','yes');
TrLabel=(Abund./repmat(sum(Abund), size(Abund, 1), 1));

This gives EM of size 8000 x Channel (Pure_TrSet),
and Abund here should come out as 8000 x Channel (TrLabel).

But the sample TrLabel is 8000 x 5.

I am not really sure how TrLabel is generated.
It would be very helpful if you could elaborate on this.

Thanks once again

About TrLabel and TeLabel in the code

I would like to ask a question: does the original code extract x candidate endmembers from every 5x5 pixel patch? Why does my final TrLabel end up with size 9560x760 (9560 being the number of pixels)?
The code cannot generate TeLabel; where does it come from?

Larger areas

Nicely done Danfeng, thanks for making this available.

Here's a question - you are using VCA as a preliminary step (and presumably could use any sort of endmember extraction).

Doing this over a large area would be problematic and would need to be parallelised, e.g. if you had billions of pixels (or more).

Do you think endmembers taken from a simpler neural network/ANN that could work at such a scale via patching would be a reasonable substitute?

The forward_propagation function is not defined

Hello, how should the forward_propagation function in the following code be defined?

# Forward propagation: build the forward propagation in the TensorFlow graph
### START CODE HERE ### (1 line)
z3 = forward_propagation(X, parameters)
### END CODE HERE ###

How to get Trlabel?

I understand that it has to be 8000 x 5,
but the code in Pseudo_endmembers_generation.m at line 23 is:
Abund = sunsal(EM, X', 'lambda', 0, 'ADDONE', 'no', 'POSITIVITY', 'yes', ...
'AL_iters', 200, 'TOL', 1e-4, 'verbose','yes');

EM here is around 8000 x Channel and X' (N x Channel) here is the data.
How is the true label of size 8000 x 5 generated using SUnSAL? This part is very confusing.
The correct way should have been:

Abund = sunsal(True_EM, EM, 'lambda', 0, 'ADDONE', 'no', 'POSITIVITY', 'yes', ...
'AL_iters', 200, 'TOL', 1e-4, 'verbose','yes');

Would you please clarify this for me?

Regards,

Boinao

Originally posted by @Boinao in #4 (comment)

How to adjust the code for other datasets

Hello,
Thank you for sharing the code.

I need help understanding how to adjust the Pseudo_endmembers_generation.m for other datasets.
From the other issues I see you said that k is the number of patches, but it affects the resulting dimensions.
Also, in this line:
[sub_EM, ind, ~] = VCA(sub_X_2d, 'Endmembers', 5, 'SNR', 30);

What does the 5 represent?
Should I set it to a number larger than the actual number of endmembers so that the algorithm finds the actual number?
If I set it to the actual number of endmembers, the EM and Abund dimensions are not correct.

My dataset is as follows:
X = 156 * 9025 (number of bands x number of pixels)
M = 156 * 3 (number of bands x number of endmembers)
Abundance = 9025 * 3 (number of pixels x number of endmembers)

Now, here is what I get if I set k = 5 and set this line to: [sub_EM, ind, ~] = VCA(sub_X_2d, 'Endmembers', 3, 'SNR', 30);

I get:

Abund= 3 * 1083
EM = 156 * 1083
