c-hofer / torchph
The essence of my research, distilled for reusability. Enjoy 🔥!
Home Page: https://c-hofer.github.io/torchph/
License: MIT License
Planning to rename chofer_torchex to torchph for easier use.
So far we have to build the docs locally and push them to gh-pages.
Desired new behavior:
docs are updated automatically with each push to master
it remains possible to build the docs locally in a gitignored folder for testing before pushing
@rkwitt Research if you want to; I'll do the coding.
At the moment there are several warnings when compiling the CUDA code. These should be investigated and fixed.
Currently, building the Sphinx docs requires importing all modules, which fails when no nvcc is available because the CUDA extensions won't compile.
This dependency should be removed.
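One common way to remove this dependency is Sphinx's `autodoc_mock_imports` option, which stubs out modules that cannot be imported on the docs machine. This is only a sketch: the module path below is a hypothetical example, not the package's actual layout.

```python
# conf.py (sketch) -- mock out modules whose import would trigger
# loading of compiled CUDA extensions, so autodoc can run without nvcc.
# NOTE: the module name below is a hypothetical placeholder; substitute
# the actual extension module(s) of the package.
autodoc_mock_imports = [
    "chofer_torchex.pershom.pershom_backend",
]
```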
Hi, thanks for your work! I would like to know whether torchph can compute persistent homology in parallel on the GPU, e.g.:
n point clouds of size [batch, pointcloud_number, dim]
to
n persistence diagrams of size [batch, persistence_barcode]
Thanks in advance for any help and answers you can give.
Currently the VR computation returns the distance between two vertices as a death time; it is not obvious whether this is expected.
torch.mul behavior has changed. Adapt to the new behavior for element-wise multiplication.
When trying to make the pershom_cpp_src target, the build references a 'pershom.cpp' that does not exist.
Related error:
make: *** No rule to make target 'pershom.cpp', needed by 'pershom.o'. Stop.
Hello, I have a question about using SLayer. Could you please clarify why it is not possible to differentiate w.r.t. its input?
Thank you
Utils needs to be refactored into the pytorch_utils repository and removed from chofer_torchex.
Currently all kernels operate on the default stream. This makes PyTorch's stream context manager useless and leads to crashes.
Hi, thank you so much for sharing this useful library! I am wondering how to use it to handle batched input. For example, to compute the VR persistence L1 features of a batch of shape BxNx3, one simple way is to call your "vr_persistence_l1" function in a for loop, which costs a lot of time (B times as much as a single input). Do you have any suggestions for this scenario?
Thanks for your work!
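Until native batching exists, the for-loop fallback described above can be sketched as a generic helper. This is a pure-Python illustration, not torchph's API: `min_pairwise_l1` is a toy stand-in for a per-point-cloud computation such as `vr_persistence_l1`.

```python
def batched(fn, batch):
    """Apply a per-sample function over a batch with a Python loop.

    This is the straightforward O(B) fallback; a truly batched kernel
    would fuse the B calls into a single (GPU) computation.
    """
    return [fn(sample) for sample in batch]


def min_pairwise_l1(cloud):
    """Toy per-cloud computation: smallest pairwise L1 distance.

    Stand-in for a real per-sample function like vr_persistence_l1.
    """
    return min(
        sum(abs(a - b) for a, b in zip(p, q))
        for i, p in enumerate(cloud)
        for q in cloud[i + 1:]
    )


# A "batch" of two tiny 2-D point clouds.
batch = [
    [(0, 0), (1, 0)],   # closest pair at L1 distance 1
    [(0, 0), (2, 3)],   # closest pair at L1 distance 5
]
results = batched(min_pairwise_l1, batch)
```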
We need to refactor the VR computation so that distance matrices can be passed directly into the pipeline.