
tntorch's People

Contributors

aelphy, aiboyko, gngdb, rballester

tntorch's Issues

numel in tutorials

Hi, there is a problem in some of the tutorials: when computing the compression ratio, they use numel instead of numcoef.

ops.mul() needs long, byte or bool

Hi All,

When I try to do this simple element-wise multiplication:

"""
A = tn.ones(10,10).to(torch.long)
B = tn.rand(10,10).to(torch.long)

result = tn.mul(A,B)
"""

I get an error saying that tensors used as indices must be long, byte or bool. However, the error seems to originate inside the tn.cross function. Any ideas on how to fix this?

Thank you and cheers,
Bastien
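
For what it's worth, a workaround sketch (an assumption on my part: the cross-approximation machinery underneath tn.mul expects floating-point tensors, so keep both operands in the default float dtype and cast only the decompressed result):

import torch
import tntorch as tn

A = tn.ones(10, 10)  # keep the default float dtype
B = tn.rand(10, 10)

result = tn.mul(A, B)
print(result.torch().to(torch.long))  # cast the decompressed result if needed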

List of values as ranks no longer supported?

Hey there, love the library! I noticed that the documentation shows that a list or other sequence can be passed as ranks_cp and the other ranks_ arguments. However, I'm currently not able to do that.

The following works fine:

# full is a 2D tensor of shape [32, 32]
t = tn.Tensor(full, ranks_cp=16)

But the following throws an exception:

# full is a 2D tensor of shape [32, 32]
t = tn.Tensor(full, ranks_cp=[8, 8])
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/yuri/.local/lib/python3.10/site-packages/tntorch/tensor.py", line 151, in __init__
    assert not hasattr(ranks_cp, '__len__')
AssertionError

Is this functionality no longer supported? I'm seeing it both in the code comments and in the official documentation, so I at least wanted to call out the inconsistency.

Could you provide an example of sensitivity analysis for a network?

Hi, thank you for your work.
I have time-series data with different features. This Sobol analysis sounds interesting, and better than partial derivatives or the Morris index. I'm wondering whether it can also be used with an LSTM model to see the sensitivity of the output to the different features. I hope you could provide an example of it. Thank you so much.
Best regards,
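
For reference, a minimal sketch of how Sobol indices are queried in tntorch's sensitivity-analysis tutorial; the shape, rank, and symbol names below are placeholders, and t would normally be a surrogate fitted to your model (e.g. via tn.cross) rather than a random tensor:

import tntorch as tn

N = 4                            # number of input features (placeholder)
t = tn.rand([32]*N, ranks_tt=5)  # stand-in for a fitted surrogate tensor
x1, x2, x3, x4 = tn.symbols(N)
print(tn.sobol(t, tn.only(x1)))  # variance fraction due to feature 1 alone
print(tn.sobol(t, x1))           # all variance terms involving feature 1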

Default dtype float64

Hi,

I was playing around with this library in my project and kept getting some strange behaviour. It took me a while to track it down, but it seems to be caused by this line:

torch.set_default_dtype(torch.float64)
https://github.com/VMML/tntorch/blob/8c81a1cbb0c5b19db7c26a787acfca35e0fbd960/tntorch/tensor.py#L4

which runs when tntorch is imported. Is this line really necessary, given that it has knock-on effects when incorporating tntorch into existing torch projects?

Thanks for the great library.
Tom
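
A possible workaround until this is resolved (my own suggestion, not an official one): reset the default dtype right after importing tntorch.

import torch
import tntorch as tn  # sets the default dtype to float64 as a side effect

torch.set_default_dtype(torch.float32)  # restore PyTorch's usual default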

Output results are full of NaN

Results are all NaN for some voxels. Could you help solve the problem?

ALS -- initialization time = 0.7430188655853271
iter: 0 | eps: nan | total time: 0.7723
iter: 1 | eps: nan | total time: 0.7994
iter: 2 | eps: nan | total time: 0.8267
iter: 3 | eps: nan | total time: 0.8538
iter: 4 | eps: nan | total time: 0.8811
iter: 5 | eps: nan | total time: 0.9083
iter: 6 | eps: nan | total time: 0.9337
iter: 7 | eps: nan | total time: 0.9591
iter: 8 | eps: nan | total time: 0.9845
iter: 9 | eps: nan | total time: 1.0099
iter: 10 | eps: nan | total time: 1.0353
iter: 11 | eps: nan | total time: 1.0607
iter: 12 | eps: nan | total time: 1.0860

TT decomposition error

When I run the tensor-train decomposition code in tntorch/tests/test_gpu.py, I get the error:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
How can I run the TT decomposition on the GPU?
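
For reference, this is what I would expect to work, based on the device argument that appears in other issues here (treat the exact call as an assumption):

import torch
import tntorch as tn

full = torch.randn(32, 32, 32, device='cuda')
t = tn.Tensor(full, ranks_tt=3, device='cuda')  # keep data and cores on the GPU
print(t)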

tntorch.metrics.dot() does not perform the calculation described in the API documentation

The API documentation states:

Example: suppose t1 has shape 3 x 4 and t2 has shape 3 x 4 x 5 x 6. Then, tn.dot(t1, t2) will have shape 5 x 6.

The function does not perform this contraction; instead, it raises a RuntimeError:

a = torch.randn(3, 4)
b = torch.randn(3, 4, 5, 6)
tn.metrics.dot(a, b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\anaconda\envs\TT-PINN\Lib\site-packages\tntorch\metrics.py", line 67, in dot
    return t1.flatten().dot(t2.flatten())
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: inconsistent tensor size, expected tensor [12] and src [360] to have the same number of elements, but got 12 and 360 elements respectively
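
For comparison, the contraction the documentation describes amounts to summing over the leading dimensions shared by both tensors, which plain PyTorch can do:

import torch

t1 = torch.randn(3, 4)
t2 = torch.randn(3, 4, 5, 6)
out = torch.tensordot(t1, t2, dims=([0, 1], [0, 1]))
print(out.shape)  # torch.Size([5, 6])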

Tensor indices

Hello all,

I am just starting to use the library, and I am wondering if there is a way to specify tensor indices with labels or tags, and to perform contractions according to these tags.

BR,

Rafael

Usage with PyTorch optimizers using nn.Parameter

Hi, I am using this library to define a trainable weight in my nn.Module class. However, I am unable to add it as an nn.Parameter directly, so I'm trying to register the core tensors as parameters of my module like this:

self.weight = tn.randn(self.n, self.in_size, self.out_size, ranks_tucker=self.rank, device='cuda', requires_grad=True)
cores = []
for c_i, core in enumerate(self.weight.cores):
    core = nn.Parameter(core)
    self.register_parameter('tucker_core_{}'.format(c_i), core)
    cores.append(core)
self.weight.cores = cores

Us = []
for u_i, u in enumerate(self.weight.Us):
    u = nn.Parameter(u)
    self.register_parameter('tucker_Us_{}'.format(u_i), u)
    Us.append(u)

self.weight.Us = Us
self.model_params = nn.ParameterList(cores + Us)

Then, I'm using Adam optimizer in the default way:

optimizer = torch.optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.l2norm)

However, the network doesn't seem to learn anything. The weights don't update, which probably means the optimization isn't working. Do you have any idea how to fix this? Many thanks for this nice repo!
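
A sanity check worth running with the setup above (plain PyTorch, no tntorch assumptions; x stands for a dummy input batch): verify that gradients actually reach the registered cores after a backward pass. If they stay None, the forward pass is not using the registered Parameter objects.

loss = model(x).sum()  # model/x are placeholders for the module above
loss.backward()
for name, p in model.named_parameters():
    print(name, None if p.grad is None else p.grad.abs().max().item())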

How do I see the tensor data after decomposition?

Consider,

import torch
import tntorch as tn

a = torch.randn(2, 3, 4)
b = tn.Tensor(a, ranks_cp=2)

If I want to view the values of the approximated tensor b, how do I do that? Can I get a numpy array or a torch tensor like a? I need to perform operations such as element-wise subtraction between the true tensor and the approximated one.
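
The torch() method that appears in other issues here seems to be the answer; a sketch:

approx = b.torch()     # decompress back to a regular torch.Tensor
residual = a - approx  # element-wise difference with the original
print(residual.abs().max())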

Please update the distribution on PyPI

If you do pip install tntorch, it installs an older version that crashes on tt.round() if the tt is on a CUDA device. The GitHub version already has this fixed.

tensor compression seems to not work

The straightforward TT decomposition of a full tensor does not work properly for me.

Minimal example:

import tntorch as tn
import torch
import numpy as np

X, Y, Z = np.meshgrid(range(128), range(128), range(128))
full = torch.Tensor(
    np.sqrt(np.sqrt(X) * (Y + Z) + Y * Z**2) * (X + np.sin(Y) * np.cos(Z))
)  # Some analytical 3D function
print(full.shape)

t = tn.Tensor(full, ranks_tt=3, requires_grad=True)  # You can also pass a list of ranks


def metrics():
    print(t)
    print(
        "Compression ratio: {}/{} = {:g}".format(
            full.numel(), t.numel(), full.numel() / t.numel()
        )
    )
    print("Relative error:", tn.relative_error(full, t))
    print("RMSE:", tn.rmse(full, t))
    print("R^2:", tn.r_squared(full, t))


metrics()

Output:

torch.Size([128, 128, 128])
3D TT tensor:

 128 128 128
  |   |   |
 (0) (1) (2)
 / \ / \ / \
1   3   3   1

Compression ratio: 2097152/2097152.0 = 1
Relative error: tensor(0.0005, grad_fn=<DivBackward0>)
RMSE: tensor(22.0728, grad_fn=<DivBackward0>)
R^2: tensor(1.0000, grad_fn=<RsubBackward1>)

The expected output would be the one given in the tutorial.
In particular, the compression ratio should be $>1$.

I experience this behavior with both Python 3.9.6 and 3.12.2 on an M1 MacBook under macOS Sonoma 14.4.1.
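
This looks like the numel-vs-numcoef issue reported above: Tensor.numel() evidently counts the entries of the uncompressed tensor, so the ratio is trivially 1. A sketch of the intended computation, assuming numcoef() counts the coefficients of the compressed representation as the tutorials suggest:

print("Compression ratio: {}/{} = {:g}".format(
    full.numel(), t.numcoef(), full.numel() / t.numcoef()))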

Is it possible to implement a tensor-train neural network layer using tntorch?

It is not immediately obvious to me, just from skimming the documentation and tutorials, that this can be done. I am trying to replace a fully connected layer in a neural network with a tensor-train format matrix, so I need to be able to train the tensor-train cores by backpropagation, and to do it on multiple GPUs. Any clarification would be appreciated. Thanks.
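
A rough sketch of one possible approach, under the assumption that tn.Tensor accepts a list of cores (as the core-access issue below suggests) and that decompressing with torch() is differentiable; the class name and sizes are placeholders:

import torch
import torch.nn as nn
import tntorch as tn

class TTLinear(nn.Module):
    # Fully connected layer whose weight matrix is stored as TT cores.
    def __init__(self, in_dim=256, out_dim=256, rank=8):
        super().__init__()
        w = tn.randn(in_dim, out_dim, ranks_tt=rank, requires_grad=True)
        # Register each core so the optimizer sees (and updates) it
        self.cores = nn.ParameterList(nn.Parameter(c) for c in w.cores)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x):
        # Decompress the weight from its cores at every forward pass
        weight = tn.Tensor(list(self.cores)).torch()
        return x @ weight + self.bias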

__add__ fails when adding a literal

import tntorch as tn
import torch
p = torch.linspace(1,10000,100000).reshape(1000,100).cuda()
t=tn.Tensor(p,device=p.device)
print(t+2)

RuntimeError Traceback (most recent call last)
in <module>()
4 p = torch.linspace(1,10000,100000).reshape(1000,100).cuda()
5 t=tn.Tensor(p,device=p.device)
----> 6 print(t+2)

/usr/local/lib/python3.7/dist-packages/tntorch/tensor.py in __add__(self, other)
374 column1 = torch.cat([core1, torch.zeros([core2.shape[0], this.shape[n], core1.shape[2]], device=core1.device)], dim=0)
375 column2 = torch.cat([torch.zeros([core1.shape[0], this.shape[n], core2.shape[2]], device=core2.device), core2], dim=0)
--> 376 c = torch.cat([column1, column2], dim=2)
377 cores.append(c)
378 Us.append(None)

RuntimeError: All input tensors must be on the same device. Received cuda:0 and cpu
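
A workaround sketch, mirroring the constructor call that already works above: build the constant as a tensor on the same device before adding, instead of relying on scalar promotion.

two = tn.Tensor(torch.full(p.shape, 2.0, device=p.device), device=p.device)
print(t + two)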

Is it possible to access the individual tensors in the tensor train?

Hi, thank you for this convenient package!

I would like to access the tensors in a tensor train as a list. For example,

import torch
import tntorch as tn

a = torch.randn(20, 20, 20, 20)
b = tn.Tensor(a, ranks_tt=10)
print(b)

produces the following result:

4D TT tensor:

 20  20  20  20
  |   |   |   |
 (0) (1) (2) (3)
 / \ / \ / \ / \
1   10  10  10  1

From b, is it possible to obtain the individual tensors as a list of dimensions [ 1x20x10, 10x20x10, 10x20x10, 10x20x1 ]?

b.torch() gives an uncompressed 20x20x20x20 tensor, which is not what I want.

Thanks!!
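
Judging by the cores attribute used in the nn.Parameter issue above, the individual TT cores should be directly accessible; a sketch:

for core in b.cores:
    print(core.shape)
# expected: [1, 20, 10], [10, 20, 10], [10, 20, 10], [10, 20, 1]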

The code runs well on Linux, but fails on Windows

Here is the code and error message:
t = tn.rand((3, 3, 3, 3, 3, 3))
print(t)

t = tn.cross(function=lambda x: x**2, tensors=[t])
print(t)

Traceback (most recent call last):
  File "C:\Users\4\PycharmProjects\TTALS\test.py", line 22, in <module>
    t = tn.cross(function=lambda x: x ** 2, tensors=[t])
  File "C:\Users\4\anaconda3\lib\site-packages\tntorch\cross.py", line 261, in cross
    ys_val = f([t[Xs_val].torch() for t in tensors])
  File "C:\Users\4\anaconda3\lib\site-packages\tntorch\cross.py", line 261, in <listcomp>
    ys_val = f([t[Xs_val].torch() for t in tensors])
  File "C:\Users\4\anaconda3\lib\site-packages\tntorch\tensor.py", line 1019, in __getitem__
    factors['index'] = get_key(counter, key[i])
  File "C:\Users\4\anaconda3\lib\site-packages\tntorch\tensor.py", line 933, in get_key
    return self.cores[counter][..., key, :]
IndexError: tensors used as indices must be long, byte or bool tensors

I changed the "key" to 0 and the error went away, so the problem may be with the "key".
The problem only occurs on Windows.
