Comments (3)

Pseudomanifold avatar Pseudomanifold commented on May 17, 2024

Dear Zexian,

Thanks for your kind words! Unfortunately, there are no quick routines available for building such graphs/simplicial complexes at the moment. What you could try is to use something like nglpy to obtain faster 1-skeletons and then expand them yourself.
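
For illustration, here is a minimal sketch of that idea. It uses a k-nearest-neighbour graph from scikit-learn as a stand-in for the neighbourhood graphs nglpy produces, and then expands the resulting 1-skeleton to its clique (flag) complex up to dimension 2; the function names are purely illustrative:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_one_skeleton(points, k=8):
    """1-skeleton as an edge set from a k-nearest-neighbour graph
    (a stand-in for the neighbourhood graphs nglpy would produce)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    _, idx = nn.kneighbors(points)
    edges = set()
    for i, neighbours in enumerate(idx):
        for j in neighbours[1:]:               # skip the query point itself
            edges.add((min(i, int(j)), max(i, int(j))))
    return edges

def expand_to_two_skeleton(edges):
    """Expand a 1-skeleton to its clique (flag) complex up to dimension 2:
    add every triangle whose three edges are already present."""
    adjacency = {}
    for i, j in edges:
        adjacency.setdefault(i, set()).add(j)
        adjacency.setdefault(j, set()).add(i)
    triangles = set()
    for i, j in edges:
        for k in adjacency[i] & adjacency[j]:  # a common neighbour closes a triangle
            triangles.add(tuple(sorted((i, j, k))))
    return triangles

points = np.random.rand(100, 3)
edges = knn_one_skeleton(points)
triangles = expand_to_two_skeleton(edges)
print(len(edges), len(triangles))
```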

What you are mentioning is indeed a relevant use case, and I have thought about additional measures. One thing I would like to eventually integrate into pytorch-topological is this approach on distributed topology. Maybe you can find some inspiration in there. I am very happy to assist you in integrating this into the repository here!

Hope that helps!

zexhuang avatar zexhuang commented on May 17, 2024

Hi Bastian,

Thank you so much for your response and suggestions; I will definitely look into those resources.

Since I am working with n-dimensional point clouds, projecting the edge features from persistence diagrams (PDs) back to the edges themselves (unlike the ring-graph examples in your TOGL and GIF papers) is not necessary.

If the above statement is true, then I only care about the 0-dimensional features (connected components) of the point clouds in their PDs. If I were to apply a Vietoris-Rips complex (which is much faster than the Tree Simplex construction, computationally speaking) to the learned filtrations of point clouds (via any set- or graph-based learning layer), would the gradients still exist?

I believe that the purpose of applying the Tree Simplex construction in the sample code of TOGL is to have an injective mapping from PD features back to the corresponding nodes, so that gradients exist for the learnable filtration functions. But I doubt that this holds true for Vietoris-Rips complexes, as they mainly consider some distance metric (say, a simple Euclidean metric) on the input points.

Regards,
Zexian.

Pseudomanifold avatar Pseudomanifold commented on May 17, 2024

Dear Zexian,

Yes, if you are only interested in 0D features anyway, I'd go for Vietoris-Rips complexes with a maximum dimension of 1. You will still get gradients with that approach (in fact, this follows our Topological Autoencoders framework nicely; the framework is also implemented in pytorch-topological now).
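
As a rough sketch of what this could look like in code (treat the exact class and argument names, e.g. `VietorisRipsComplex(dim=1)` and `SummaryStatisticLoss`, as assumptions to verify against the current pytorch-topological documentation):

```python
import torch
from torch_topological.nn import SummaryStatisticLoss, VietorisRipsComplex

# Vietoris-Rips persistence up to dimension 1: only the 0D features are of
# interest, but edges must be present for connected components to die.
vr = VietorisRipsComplex(dim=1)
loss_fn = SummaryStatisticLoss(summary_statistic='total_persistence')

points = torch.randn(64, 8, requires_grad=True)  # an n-dimensional point cloud
persistence_information = vr(points)
loss = loss_fn(persistence_information)

loss.backward()  # gradients reach the point coordinates
```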

This is still differentiable because the distances themselves are differentiable functions of the point positions, so gradients can flow back to the points and adjust their positions accordingly.
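
To make this explicit, here is a small self-contained sketch in plain PyTorch (not pytorch-topological's API): the 0-dimensional death times of a Vietoris-Rips filtration are exactly the edge lengths of a minimum spanning tree of the pairwise-distance graph, and those lengths are differentiable in the point coordinates:

```python
import torch

def zero_dim_death_times(points):
    """0-dimensional death times of a Vietoris-Rips filtration: the edge
    lengths of a minimum spanning tree of the pairwise-distance graph
    (Kruskal's algorithm with union-find)."""
    n = points.shape[0]
    dists = torch.cdist(points, points)        # differentiable pairwise distances

    # Candidate edges (i < j), sorted by length.
    idx_i, idx_j = torch.triu_indices(n, n, offset=1)
    edge_lengths = dists[idx_i, idx_j]
    order = torch.argsort(edge_lengths)

    # Union-find over point indices; the combinatorial pairing needs no
    # gradients, only the selected edge lengths do.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for k in order.tolist():
        a, b = find(idx_i[k].item()), find(idx_j[k].item())
        if a != b:                             # this edge merges two components
            parent[a] = b
            deaths.append(edge_lengths[k])
    return torch.stack(deaths)                 # n - 1 finite death times

# Toy usage: a persistence-based loss whose gradients reach the points.
points = torch.randn(32, 3, requires_grad=True)
loss = zero_dim_death_times(points).sum()      # total 0-dimensional persistence
loss.backward()
print(points.grad.shape)                       # torch.Size([32, 3])
```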

Hope that helps!
