imageseg-2.5d_topo's People

Contributors

huxiaoling


imageseg-2.5d_topo's Issues

Python doesn't recognize .so file

Hi,
I tried to run your example, but it seems that PersistencePython.so isn't recognized (screenshot of the import error attached).

Maybe you know how to import this .so file?

Thanks!
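
For context, a minimal sketch (my own note, not from the repository) of what usually has to hold for a compiled extension such as PersistencePython.so to be importable: the directory containing the .so must be on sys.path, and the .so must have been built against the same Python version/ABI as the interpreter in use. The path below is a placeholder.

import sys
sys.path.append("/path/to/folder/containing/the/so")  # placeholder: folder holding PersistencePython.so
from PersistencePython import cubePers                 # still fails if the build/ABI does not match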

A question about adding a black border to images during training

The paper mentions that, to help the network learn more meaningful topological structures, a black border is added around the mask (I assume this refers to the border of the foreground class?). This part does not seem to be reflected in this code. Also, does the original image need the same border? In the Betti number computation code, I see that you add a black border around the image, and I don't understand why this is also needed when computing the Betti error during testing.
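
For reference, the one-pixel background-valued border the question describes can be written, in a minimal sketch of my own (not the repository's code), as:

import numpy as np
# pad a 2D mask with a one-pixel border of background (value 0)
mask_padded = np.pad(mask, pad_width=1, mode='constant', constant_values=0)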

The scope of application of topological loss

I am currently using pix2pix to deal with the problem of image segmentation. I want to add the topological loss to the loss of the generator. Does this apply to such a problem?
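
One common pattern (a sketch under my own assumptions, not something prescribed by this repository) is to add the topological term to the existing generator objective with a tunable weight; loss_GAN, loss_L1 and topo_loss below are placeholders for the adversarial/reconstruction losses and whatever routine computes the topological loss from the generator's likelihood output.

lambda_topo = 0.005                                              # weight to be tuned
loss_G = loss_GAN + 100 * loss_L1 + lambda_topo * topo_loss(fake_B, real_B)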

Use for binary classification

Hi, @HuXiaoling
I have some questions about using this for binary classification.

  1. Does SLICES_COLLECT mean how many classes there are in the classification?
    https://github.com/HuXiaoling/imageSeg-3D_topo/blob/01464f17682a2a14f938116d4724dc9002375e8f/modules.py#L35
  2. If I have a binary classification task, it seems that I need to use this line to calculate the topo loss?
    https://github.com/HuXiaoling/imageSeg-3D_topo/blob/01464f17682a2a14f938116d4724dc9002375e8f/modules.py#L140
  3. But there is only one channel in my last convolutional layer, followed by a sigmoid, and I calculate nn.BCELoss() for my model. Could you give me an example of how to calculate the topo loss in this case (a rough sketch of my setup follows this list)?

Thank you in advance!!
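
For clarity, a rough sketch of the single-channel setup described above (my own illustration; getTopoLoss and lambda_topo are placeholders, not functions from this repository):

import torch
import torch.nn as nn

bce = nn.BCELoss()
likelihood = torch.sigmoid(model(images))           # (B, 1, H, W) likelihood map
loss_bce = bce(likelihood, masks.float())
loss_topo = getTopoLoss(likelihood[:, 0], masks)    # hypothetical topo-loss routine
loss = loss_bce + lambda_topo * loss_topo           # lambda_topo: weight to tune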

Unsatisfied output

Hello, I used a simple image (attached) to test the code as follows:
persistence_result = cubePers(np.reshape(f_padded, f_padded.size).tolist(), list(f_padded.shape), 0.001)
I find that the output (only one 1-dimensional feature) does not match gudhi.CubicalComplex (which reports three 1-dimensional features). Are there any bugs in this repository?
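
For reference, the gudhi comparison described above can be reproduced in a few lines (my own sketch, assuming f_padded is a 2D NumPy array of filtration values):

import numpy as np
import gudhi

cc = gudhi.CubicalComplex(top_dimensional_cells=f_padded)   # sublevel-set filtration of the image
cc.persistence()                                            # compute the persistence diagram
print(cc.persistence_intervals_in_dimension(1))             # 1-dimensional features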

By the way, I wonder about the meaning of critical points. Could you please explain what critical points are, and whether the topology loss can be calculated without them?

Look forward to your reply.
Thank you!

Problem reading topoLoss.py

Hello, thank you for your work!
Could you add some comments to the topoLoss.py file? Since I am not very familiar with the calculation of persistent homology (PH), some notes would be very helpful.

Error in computing compute_persistence_2DImg_1DHom_lh

During training, what could cause errors like the following?

not scape per: 0.015384615384615385 loss_topo tensor(0., device='cuda:0', grad_fn=)
CE: 0.0009232205338776112 Topo: 0.0
row: 0 col: 0
torch.Size([65, 65])
dimension: 2
0 4761
1 9384
2 4624
reduced dimension 2
saved dimension 2
reduced dimension 1
saved dimension 1
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Filtration building time = 0 Min
Reduction time = 0 Min
@@@@@@@@@@@@@@@@@@@@@@@@@@@@
All calculations computed in 0
Number of errors = 9384
If error happens, please contact the author.
(0,)
File "topoLoss.py", line 47, in getPers
pd_lh, bcp_lh, dcp_lh = compute_persistence_2DImg_1DHom_lh(likelihood)
File "TDFMain_pytorch.py", line 49, in compute_persistence_2DImg_1DHom_lh
dgm = persistence_result_filtered[:, 1:3]
IndexError: too many indices for array
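
A plausible cause (my guess, not confirmed by the author): when no persistence pairs survive the filtering step, persistence_result_filtered is an empty 1-D array, and slicing it with [:, 1:3] raises exactly this IndexError. A guard of the following kind (my own sketch) avoids the crash by skipping the topological term for that patch:

import numpy as np

persistence_result_filtered = np.asarray(persistence_result_filtered)
if persistence_result_filtered.size == 0:
    dgm = np.zeros((0, 2))                       # empty diagram: nothing to match
else:
    dgm = persistence_result_filtered[:, 1:3]    # (birth, death) pairs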

Can a graph use PH to compute the Betti numbers?

Hello, HuXiaoling,
Thank you for your work! This is a very innovative article.
My subject is cortical segmentation of healthy brains, and I plan to use persistent homology to optimize it.
Your input is an image, but my input is a graph (a trisurf composed of vertices and triangles) in the graph-theoretic sense. If I understand correctly, TopoLoss applies PH to the predicted likelihood map and the GT to obtain the Betti numbers. What is the form of your likelihood map? The soft segmentation I get now has shape (10242, 36): the probability that each of the 10242 nodes belongs to each of 36 categories. Can TDA handle this form, or should I use the trisurf? My GT is a one-hot array of shape (10242, 36); can PH be used on this form?

Secondly, for a graph instead of an image, what steps should be followed to get the Betti numbers? I have consulted some documents and understand that I need to build a filtration first; how should the filtration values be set here, and how large should the threshold be? If you don't mind, I have uploaded some pictures to GitHub (https://github.com/tanjia123456/Brain); please help me check whether PH can be used as a post-processing step.

Finally, could you please share your ideas or the source code for your TopoLoss? I will only use it for academic research.
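
For what it's worth, a minimal illustration (my own sketch, not the authors' method) of persistent homology on a triangle mesh with gudhi's SimplexTree, assuming verts_val holds one filtration value per vertex (e.g. the predicted probability of a single class) and triangles lists vertex-index triples; each edge and triangle is assigned the maximum of its vertex values (a lower-star filtration):

import gudhi

st = gudhi.SimplexTree()
for v, val in enumerate(verts_val):
    st.insert([v], filtration=val)                 # vertices carry the per-node values
for a, b, c in triangles:
    for e in ([a, b], [b, c], [a, c]):
        st.insert(e, filtration=max(verts_val[e[0]], verts_val[e[1]]))
    st.insert([a, b, c], filtration=max(verts_val[a], verts_val[b], verts_val[c]))
st.persistence()                                   # compute the persistence diagram
print(st.betti_numbers())                          # Betti numbers of the full complex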
