
pytorch_spline_conv's Issues

Tutorial Segmentation Fault

Hi,

I came across your paper and found it quite interesting.
I'm now trying to run the tutorial to verify the install of pytorch_spline_conv, but I'm getting a segmentation fault.
I am using python 3.6.9 on linux with cuda 10.1 and pytorch 1.3.1. I have verified that pytorch is working.
The segmentation fault comes at line 26 in
torch_spline_conv/basis.py(26)forward()
-> basis, weight_index = op(pseudo, kernel_size, is_open_spline)

It segfaults at the same point regardless of whether CUDA is used or not. The function that op points to is different in each case.

The CPU version of the tutorial is a straight copy from the README, and the CUDA version of the tutorial is provided below:

import torch
from torch_spline_conv import SplineConv

device = torch.device('cuda')

x = torch.rand((4, 2), dtype=torch.float, device=device)  # 4 nodes with 2 features each
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3], [1, 0, 2, 1, 3, 2]], device=device)  # 6 edges
pseudo = torch.rand((6, 2), dtype=torch.float, device=device)  # two-dimensional edge attributes
weight = torch.rand((25, 2, 4), dtype=torch.float, device=device)  # 25 parameters for in_channels x out_channels
kernel_size = torch.tensor([5, 5], device=device)  # 5 parameters in each edge dimension
is_open_spline = torch.tensor([1, 1], dtype=torch.uint8, device=device)  # only use open B-splines
degree = 1  # B-spline degree of 1
norm = True  # Normalize output by node degree.
root_weight = None  # do not separately weight root nodes
bias = None  # do not apply an additional bias

out = SplineConv.apply(x, edge_index, pseudo, weight, kernel_size,
                       is_open_spline, degree, norm, root_weight, bias)

print(out.size())
torch.Size([4, 4])  # 4 nodes with 4 features each
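
For completeness, here is a minimal snippet to collect the version information involved; my main suspect is a mismatch between the torch/CUDA combination the wheel was built for and the one installed locally (that torch_spline_conv exposes __version__ is an assumption on my part):

import torch
import torch_spline_conv

print(torch.__version__)              # 1.3.1 in my case
print(torch.version.cuda)             # CUDA version torch was built against, 10.1 here
print(torch_spline_conv.__version__)  # should come from a wheel built for the same torch/CUDA combo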

Any suggestions on how to get this working would be welcome.

basis_kernel.cu install error

Hi, thanks for the awesome pytorch extensions!

I am working on a windows system, and I am installing all the requirements for your geometric extension repo. For the spline_conv extension specifically, there is an error when trying to install using pip:

cuda/basis_kernel.cu(223): error: calling a __host__ function("pow<double, long long, void> ") from a __device__ function(" const") is not allowed

The error repeats for all pow calls. For reference, I am using the py3.7_cuda100_cudnn7_1 Anaconda build on win64, running and installing on Windows Subsystem for Linux (WSL) Ubuntu 16.04 (thus I am calling the Windows Python from WSL).

I suspect that the variable types prevent the call from resolving to the CUDA overload, so I changed all calls of pow to powf in the cuda/basis_kernel.cu file. I am not sure that was the right thing to do, but it installed and so far things appear to be working (I have not done a rigorous test).

There are also a number of platform-specific changes that I needed to make to your other repos' respective setup.py files to allow for a Windows installation.

Cannot install pyg without pytorch_spline_conv in conda

Hey,
I would like to set up my environment via conda install pytorch-spline-conv -c pyg. This installs pytorch_spline_conv and leads to the error mentioned in #22.
So far, none of the comments there have helped me resolve the problem.
Overwriting anything in my conda installation via pip install ... or compiling the package locally is not really a solution, as it causes conda pack to fail.
This is why I would like to install pyg without pytorch_spline_conv. Is the hard dependency of pyg on pytorch_spline_conv necessary?
Cheers!

SplineConv issue with 1024 x 1024 x 3 image

I am trying to apply this spatial SplineConv operator to a 1024 x 1024 x 3 image converted into a graph with x = [1048576, 3], edge_index = [2, 4190208], edge_attr = [4190208, 2] (the transpose of edge_index), and pos = [1048576, 2], the positions of the nodes on the 2D grid in the x-y plane.

out = SplineConv.apply(x, edge_index, pseudo, weight, kernel_size,
is_open_spline, degree, norm, root_weight, bias)
In the above call, my weight is [25, 3, 64] with kernel size 5 and is_open_spline = torch.tensor([1, 1], dtype=torch.uint8).

When I execute this function, I get an error:

[screenshot of the error message, not included as text]
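
For reference, a minimal CPU-only sketch of the call with the shapes described above, using random data; the weight layout [kernel_size_1 * kernel_size_2, in_channels, out_channels] follows the README example and is my assumption here. Note that the per-edge intermediate alone is roughly 4.19M x 64 floats (about 1 GB):

import torch
from torch_spline_conv import SplineConv

n, e = 1_048_576, 4_190_208
x = torch.rand(n, 3)                              # one feature vector per pixel
edge_index = torch.randint(0, n, (2, e))          # placeholder edges; the real ones come from the grid
pseudo = torch.rand(e, 2)                         # 2D pseudo-coordinates in [0, 1]
weight = torch.rand(5 * 5, 3, 64)                 # kernel_size 5x5, in_channels 3, out_channels 64
kernel_size = torch.tensor([5, 5])
is_open_spline = torch.tensor([1, 1], dtype=torch.uint8)

out = SplineConv.apply(x, edge_index, pseudo, weight, kernel_size,
                       is_open_spline, 1, True, None, None)
print(out.size())  # expected: torch.Size([1048576, 64])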

Getting in depth understanding of the spline kernels

Hi @rusty1s,

I am a bit confused by the notation in your paper regarding the B-spline kernels, so I am trying to reproduce the calculation of the basis functions. Hopefully you can give me some clues.

You denote the bases as (N_{d, i}^m)_{1 \leq i \leq k_d}. Please help me with these indices. I understand that m is the degree of the spline basis, and d is the dimensionality of the pseudo vectors, right? But what is k_d? Is k_d = d = k_1?

And you wrote that you are using uniform knot vectors. Are these the same for each basis, and if so, how many knot vectors do you have? Does the number of knot vectors depend on the number of vertices in the neighborhood?
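
To make my current understanding concrete (please correct me if this is wrong), I read the kernel as the tensor product

g_\ell(u) = \sum_{p \in \mathcal{P}} w_{p,\ell} \prod_{i=1}^{d} N^m_{i,p_i}(u_i), \qquad \mathcal{P} = \{1,\dots,k_1\} \times \cdots \times \{1,\dots,k_d\},

so k_i would be the kernel size (the number of basis functions, and hence of trainable weights) along pseudo-dimension i, independent of the number of vertices in the neighborhood.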

Thank you!

Cannot import torch-spline-conv when installed from pip wheel in Torch 1.9.0

As discussed here, installing torch-spline-conv via the provided wheels fails with OSError: /lib64/libm.so.6: version 'GLIBC_2.27' not found.
We know that:

  • the issue is unaffected by the cudatoolkit version
  • the issue is unaffected by python version
  • the issue triggers only with torch 1.9.0
  • the issue triggers only on systems with CUDA acceleration.
  • recompiling solves the issue

As per @rusty1s' suggestion, adding a print statement in torch_spline_conv/__init__.py between lines 10 and 11 produces the following output:

>>> import torch_spline_conv
_version
_basis
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".conda/envs/test39/lib/python3.9/site-packages/torch_spline_conv/__init__.py", line 12, in <module>
    torch.ops.load_library(importlib.machinery.PathFinder().find_spec(
  File ".conda/envs/test39/lib/python3.9/site-packages/torch/_ops.py", line 104, in load_library
    ctypes.CDLL(path)
  File ".conda/envs/test39/lib/python3.9/ctypes/__init__.py", line 382, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /lib64/libm.so.6: version `GLIBC_2.27' not found (required by .conda/envs/test39/lib/python3.9/site-packages/torch_spline_conv/_basis_cuda.so)

This means the culprit is _basis_cuda, as also shown in the last line of the error.
Further examination using diff between the recompiled (working) package in the pytorch conda environment and the original one in the test39 environment produced the following output:

diff -qr envs/test39/lib/python3.9/site-packages/torch_spline_conv envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/_basis_cpu.so and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/_basis_cpu.so differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/_basis_cuda.so and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/_basis_cuda.so differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/__pycache__/basis.cpython-39.pyc and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/__pycache__/basis.cpython-39.pyc differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/__pycache__/conv.cpython-39.pyc and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/__pycache__/conv.cpython-39.pyc differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/__pycache__/__init__.cpython-39.pyc and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/__pycache__/__init__.cpython-39.pyc differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/__pycache__/weighting.cpython-39.pyc and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/__pycache__/weighting.cpython-39.pyc differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/_version_cpu.so and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/_version_cpu.so differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/_version_cuda.so and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/_version_cuda.so differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/_weighting_cpu.so and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/_weighting_cpu.so differ
Files envs/test39/lib/python3.9/site-packages/torch_spline_conv/_weighting_cuda.so and envs/pytorch/lib/python3.9/site-packages/torch_spline_conv/_weighting_cuda.so differ

I don't know enough about the subject to make wild speculations, but could it be possible that the provided wheels were compiled in a different environment than the one they're supposed to work in?

CUDA libraries not generated when building pytorch-spline-conv

I cannot manage to generate the CUDA libraries when building pytorch-spline-conv.
I am currently using torch 1.12.0+cu113 under Ubuntu 18.04, Python 3.7.5 and pip. I tried both installing with -f https://data.pyg.org/whl/torch-1.12.0+cu113.html and with a plain pip install.
All other torch and torch-geometric packages seem to work just fine. In /usr/local/lib/python3.7/dist-packages/torch_spline_conv I find only _basis_cpu.so, _version_cpu.so and _weighting_cpu.so, which makes __init__.py raise an error because of the suffix = 'cuda' if torch.cuda.is_available() else 'cpu' line. How can I force pytorch-spline-conv to build the CUDA libraries?
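
For what it is worth, my guess is that the build only produces the CUDA libraries when torch detects a GPU at install time (or, if I remember the setup.py of these extensions correctly, when a FORCE_CUDA=1 environment variable is set). This snippet shows what the build environment would detect:

import torch

# If this prints False in the environment where pip runs the build
# (e.g. a machine or container without a visible GPU), a source build
# will only produce the *_cpu.so libraries.
print(torch.cuda.is_available())
print(torch.version.cuda)  # None for a CPU-only torch build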

Thank you,

Giuliano

Performing Spline Convolution for evaluating Spline Surface

Hi,

I was wondering if this code could be used for interpolating a B-spline surface and evaluating it at a finer resolution. For example, given 4x4 control points of the B-spline surface, can I interpolate and evaluate the surface at 100x100 pseudo-coordinates? This might be unrelated to the graph-based notion of B-splines.

Batch operation support

Hi authors, thanks for this amazing work. When I try to put the model's parameters in CUDA mode, spline_conv does not work and tells me the parameters (weights) must be on the CPU. Also, spline_conv does not seem to support a batch dimension as the first dimension of the input tensor, the way other NN forward functions do. Is that true? If so, how can we achieve data parallelism on the GPU with spline_conv(.)? Thanks.
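
In case it clarifies the question: the only batching approach I can think of is merging all graphs of a batch into one large disconnected graph by offsetting the edge indices per graph, as in the plain-PyTorch sketch below (no library helpers; whether this is the intended way is exactly what I am asking):

import torch

def collate_graphs(graphs):
    # Merge a list of (x, edge_index, pseudo) triples into one disconnected graph.
    xs, edges, pseudos, offset = [], [], [], 0
    for x, edge_index, pseudo in graphs:
        xs.append(x)
        edges.append(edge_index + offset)  # shift node indices so the graphs stay disjoint
        pseudos.append(pseudo)
        offset += x.size(0)
    return torch.cat(xs, 0), torch.cat(edges, 1), torch.cat(pseudos, 0)

g1 = (torch.rand(4, 2), torch.tensor([[0, 1], [1, 0]]), torch.rand(2, 2))
g2 = (torch.rand(3, 2), torch.tensor([[0, 2], [2, 0]]), torch.rand(2, 2))
x, edge_index, pseudo = collate_graphs([g1, g2])
print(x.size(), edge_index.size(), pseudo.size())  # [7, 2], [2, 4], [4, 2]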

Use pseudo coordinates with gradients

Hello,
Thanks a lot for this library! In my project, I am trying to change the pseudo-coordinates (stored as edge attributes) within the network while training. However, this is not working, because SplineCNN doesn't accept pseudo-coordinates with gradients, and I get the error message below:
RuntimeError: Expected isFloatingType(grads[i].scalar_type()) to be true, but got false.

Do you think it is possible to pass pseudo-coordinates with gradients to SplineCNN?

How can I install in Google Colab?

Hi,
I tried to install in Google Colab but met this error:

Collecting torch_spline_conv
Using cached https://files.pythonhosted.org/packages/3c/dd/daa9d0b7b2ede913e573876ae286a58ec296678858f2814ff6d6789b234f/torch_spline_conv-1.1.0.tar.gz
Building wheels for collected packages: torch-spline-conv
Building wheel for torch-spline-conv (setup.py) ... error
ERROR: Failed building wheel for torch-spline-conv
Running setup.py clean for torch-spline-conv
Failed to build torch-spline-conv
Installing collected packages: torch-spline-conv
Running setup.py install for torch-spline-conv ... error
ERROR: Command "/usr/bin/python3 -u -c 'import setuptools, tokenize;file='"'"'/tmp/pip-install-xtch8f9o/torch-spline-conv/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-af37x1z8/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-xtch8f9o/torch-spline-conv/

How can I solve it?

! echo $PATH
/usr/local/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin:/usr/local/cuda/bin:/usr/local/cuda/bin

! echo $CPATH
[empty]

Question about pooling

Thanks for the implementation.
It seems like the pooling that is described as: “The pooling operation is able to obtain a coarsened graph by deriving a clustering on the graph nodes, aggregating nodes in one cluster and computing new pseudo-coordinates ...” is actually not implemented in this repository.
I'm trying to replicate this work and am wondering how you computed the new pseudo-coordinates.

GLIBC incompatibility issue on RHEL8

When running on RHEL8, installing a torch-spline-conv CPU binary and attempting to import it causes a failure due to GLIBC 2.29. It's not clear which other wheels are affected by this issue.

Minimal replication instructions

In a new venv run:
python3 -m pip install torch-spline-conv==1.2.2+pt20cpu -f https://data.pyg.org/whl/torch-2.0.1+cpu.html --force-reinstall --no-cache-dir

Navigate to site-packages/torch-spline-conv

objdump -T _basis_cpu.so | grep pow
Will show a dependency on GLIBC 2.29
(alternatively to manually inspecting .so files, install a compatible version of torch and run import torch_spline_conv to cause the error)

This is due to the optimised version of pow added in GLIBC 2.29, presumably used in the compilation of the wheel. This can be verified by running objdump -T on the _basis_cpu.so produced.

Using this optimised pow (pow has been present since 2.25, just less optimised) breaks compatibility on RHEL8. Does this package not support RHEL8, or is it possible the wheels could be rebuilt with backward compatibility?

This stackoverflow question describes how it could be done -
https://stackoverflow.com/questions/77000590/referencing-stdpow-requires-glibc2-29
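
As a quick sanity check on an affected machine (RHEL8 ships glibc 2.28), the system glibc version can also be read from Python:

import platform

# Returns e.g. ('glibc', '2.28') on RHEL8; wheels referencing GLIBC_2.29 symbols
# (such as the optimised pow mentioned above) then fail to load.
print(platform.libc_ver())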

Using spline_conv to approximate gradients on a graph

Hi,
I was wondering if it is possible to manually calibrate the B-spline weights in order to replicate a Sobel-like kernel. My goal would be a kernel that, when applied to every node feature, returns the feature gradient (along a specified direction) at the node, approximated as a convolution of the same feature over the neighbors.
If I understand correctly, it should in principle be possible to compute gradients on a graph in this way, but I think one needs to sort every node's neighbors in a consistent manner.
Do you think this can be achieved with a spline_conv, or should I try to implement it from scratch?

Many thanks.

Question: Similar to Receptive Field

Hello. I'm fairly new to ML, AI, and PyTorch, so I'm having some difficulty trying to visualize which individual nodes in the original graph affect a given node in the new graph after convolution. For example, I want to know which nodes in the original graph G affect the first node in G' (the graph after convolution), which affect the second node, and so on. Thank you so much for your assistance.
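
To make the question concrete, this is the kind of computation I have in mind, using the 4-node example from the README; it assumes each convolution only mixes information across existing edges (plus the node itself), which is exactly the behaviour I would like confirmed:

import torch

def receptive_field(edge_index, node, num_layers):
    # Input nodes that can influence `node` after `num_layers` convolutions.
    src, dst = edge_index  # messages flow from src to dst
    field = {node}
    for _ in range(num_layers):
        mask = torch.tensor([int(d) in field for d in dst])
        field |= set(src[mask].tolist())
    return field

edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
print(receptive_field(edge_index, node=0, num_layers=2))  # {0, 1, 2}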

graph preprocessing for SplineConv layer

Hello, I am not quite sure how graphs should be preprocessed for the SplineConv layer. My dataset contains graphs that each have a different number of vertices. Can they be fed directly into the layer, or do they have to be padded so that they all have the same number of vertices? If padding is needed, should I pad across the whole dataset or within batches?
Also, are there any blogs or tutorials you would recommend on graph preprocessing for spatial graph convolutions?
Thanks a lot in advance!

Usage questions

@rusty1s
I am currently trying to utilize this convolution operation for a project I am working on and I have a quick question about the implementation.

In your paper you state:

"We scale the spatial relation vectors u(i, j) to exactly match this interval, c.f . Figure 3."

I am assuming this means that no scaling of the pseudo-coordinates needs to be done during preprocessing, as this is handled by your basis/weighting code. Is this correct?
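
For concreteness, this is the per-dimension rescaling I would otherwise do in preprocessing (min-max scaling of the pseudo-coordinates into [0, 1]); my question is whether this step is redundant:

import torch

pseudo = torch.randn(6, 2)                  # raw spatial relation vectors u(i, j)
lo = pseudo.min(dim=0, keepdim=True).values
hi = pseudo.max(dim=0, keepdim=True).values
pseudo = (pseudo - lo) / (hi - lo)          # each dimension now spans exactly [0, 1]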

Thanks for sharing!

ImportError occurs in Google Colab

Hi. I'm a Google Colab user.
For the sake of using code from here, I have to install SplineConv.

First I checked the versions in Google Colab:

!python --version
print("Torch version:{}".format(torch.__version__))
print("cuda version: {}".format(torch.version.cuda))

It says

Python 3.7.13
Torch version: 1.10.0+cu111
cuda version: 11.1

So I tried to download SplineConv as:

!pip install torch-spline-conv -f https://data.pyg.org/whl/torch-1.10.0+cu111.html

And from https://data.pyg.org/whl/torch-1.10.0%2Bcu113/torch_spline_conv-1.2.1-cp37-cp37m-linux_x86_64.whl , torch-spline-conv-1.2.1 is successfully installed.

But then an ImportError occurs: 'SplineConv' requires 'torch-spline-conv'.

What should I do?

Best regards,

Spline filter

Hi!
Are the B-spline convolution kernels defined separately per input feature (one convolution filter for each input feature), or is a single convolution filter defined for all graph nodes and used to convolve all graph features?

Possibility of creating a tutorial?

Hello. Thank you so much for providing the code. This issue is not a coding/bug report. I was hoping there was a tutorial I could use to understand your paper better. Is there any plan to make one?

Installation issues

Dear Matthias,
I have issues with the installation of the package. I used your instructions for PyTorch 1.4.0 and CUDA 10.0. The installation runs smoothly, and I double-checked the Torch and CUDA versions for the installation.

However, while I can import the package, it seems to be empty: it contains the version, the required CUDA version, etc., but no actual functionality. The plain import gives no error message, but initializing e.g. SplineConv from PyTorch Geometric exposes the missing classes. This seems really weird overall.

Has this occurred before? I reinstalled multiple times and used different sources too. Nothing seems to work.

Thank you very much in advance!

Only datatype float accepted

While implementing a SplineConv network, I ran into some problems. It seems that spline_weighting requires float tensors for the positions and the edge_attr. Is there a way to also support double tensors?

CUDA error: an illegal memory access

Running a Torch 2.0.1 environment (tried with both CUDA 11.7 and 11.8) and PyTorch Lightning. A fairly simple model fails during backprop and throws:

RuntimeError: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

This same model also fails when run on the CPU, but it throws no error and just quietly exits at the first backprop stage.

I'm on a Windows x86 machine.

Plotting Learned Filters

Hi, I was trying to plot my model's learned SplineConv filters but was having some trouble with the implementation. My idea was to replicate the images presented in the README. It would be of great help if the code used to generate those plots (or to generate 2D representations of the filters) were uploaded, so as to allow a better interpretation of the models.

Thanks.
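
For reference, this is the closest I have gotten on my own, assuming a degree-1 kernel with a 5 x 5 weight grid per channel pair; as far as I understand, for degree 1 and open splines the kernel surface is simply a bilinear interpolation of that grid, so I upsample it:

import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

k1, k2 = 5, 5
weight = torch.randn(k1 * k2, 1, 1)          # replace with a trained weight of shape [k1*k2, in_ch, out_ch]

grid = weight[:, 0, 0].reshape(1, 1, k1, k2)  # pick one (input channel, output channel) pair
surface = F.interpolate(grid, size=(200, 200),  # degree-1 open B-splines interpolate linearly,
                        mode='bilinear',        # so bilinear upsampling recovers the kernel surface
                        align_corners=True)

plt.imshow(surface[0, 0].numpy(), origin='lower', extent=[0, 1, 0, 1])
plt.colorbar()
plt.title('learned kernel, channel 0 -> 0')
plt.show()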
