Comments (13)
I will run PyTorch SVD on the same dataset and report any issues.
from tensorly.
I just pushed af0700a which should fix your problem.
I'll try to look into making the SVD thresholding more memory-efficient when I get some time.
Let me know if this solves the issue!
Thanks a lot. I will pull the new code and will update you on the status
Could you give more information about your error?
The issue might be with the PyTorch SVD; it seems there are issues when running it on large tensors.
I ran the dataset with PyTorch SVD and it reproduces the same error as mentioned above. Since it is a PyTorch issue, I am closing this now.
Thanks
Following up on this: in the meantime, you could either convert your array to NumPy and use the NumPy backend to perform the Robust Tensor PCA, or we could temporarily add a function that performs only the SVD in NumPy and the rest of the computation in PyTorch.
Sorry for my long delay in updating this issue. I used the default backend of TensorLy, but I still receive the following error. I am running robust_pca on a large volume of data (~a million observations). Is it possible that TensorLy is trying to fit a million-by-million matrix in memory? The error is similar to the PyTorch error I received earlier:
~/anaconda/envs/test/lib/python3.6/site-packages/tensorly/decomposition/robust_decomposition.py in robust_pca(X, mask, tol, reg_E, reg_J, mu_init, mu_max, learning_rate, n_iter_max, random_state, verbose)
91
92 for i in range(T.ndim(X)):
---> 93 J[i] = fold(svd_thresholding(unfold(D, i) + unfold(L[i], i)/mu, reg_J/mu), i, X.shape)
94
95 D = L_x/mu + X - E
~/anaconda/envs/test/lib/python3.6/site-packages/tensorly/tenalg/proximal.py in svd_thresholding(matrix, threshold)
68 procrustes : procrustes operator
69 """
---> 70 U, s, V = T.partial_svd(matrix, n_eigenvecs=min(matrix.shape))
71 return T.dot(U, T.reshape(soft_thresholding(s, threshold), (-1, 1))*V)
72
~/anaconda/envs/test/lib/python3.6/site-packages/tensorly/backend/numpy_backend.py in partial_svd(matrix, n_eigenvecs)
169 if n_eigenvecs is None or n_eigenvecs >= min_dim:
170 # Default on standard SVD
--> 171 U, S, V = scipy.linalg.svd(matrix)
172 U, S, V = U[:, :n_eigenvecs], S[:n_eigenvecs], V[:n_eigenvecs, :]
173 return U, S, V
~/anaconda/envs/test/lib/python3.6/site-packages/scipy/linalg/decomp_svd.py in svd(a, full_matrices, compute_uv, overwrite_a, check_finite, lapack_driver)
127 # perform decomposition
128 u, s, v, info = gesXd(a1, compute_uv=compute_uv, lwork=lwork,
--> 129 full_matrices=full_matrices, overwrite_a=overwrite_a)
130
131 if info > 0:
MemoryError:
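For scale, a back-of-the-envelope check (an illustrative calculation, not from the thread): scipy.linalg.svd defaults to full_matrices=True, so for an unfolding with a million rows it would try to allocate a dense million-by-million U factor.

```python
# A dense float64 matrix with a million rows and columns:
m = 1_000_000
bytes_needed = m * m * 8        # 8 bytes per float64 entry
print(bytes_needed / 10**12)    # 8.0, i.e. roughly 8 TB of RAM
```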
It seems to be a memory issue -- this particular method relies on a full SVD.
Can you run it on a machine with more RAM?
Alternatively, we could change the SVD thresholding so that it does not compute all the singular values (e.g. by setting the tolerance appropriately in a sparse SVD).
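That alternative could look roughly like this (a sketch only, not TensorLy's implementation; the k cap is a hypothetical parameter). Singular values at or below the threshold are zeroed by soft-thresholding anyway, so a truncated SVD with scipy.sparse.linalg.svds only needs to cover the values above it:

```python
import numpy as np
from scipy.sparse.linalg import svds

def truncated_svd_thresholding(matrix, threshold, k=50):
    """Sketch of SVD thresholding via a truncated SVD.

    Computes only the k largest singular triplets; singular values
    <= threshold vanish under soft-thresholding, so the result matches
    the full computation whenever fewer than k values exceed threshold.
    """
    k = min(k, min(matrix.shape) - 1)      # svds requires k < min(shape)
    U, s, Vt = svds(matrix, k=k)           # top-k singular triplets
    s = np.clip(s - threshold, 0.0, None)  # soft-thresholding
    return U @ np.diag(s) @ Vt
```

This avoids materialising the full U and V factors, at the cost of choosing k: if k is too small, singular values above the threshold are silently dropped.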
@rtmatx, did it solve your issue?
Just checking whether this is solved. If so, I will close the issue.
I am extremely sorry for the late reply. The issue has been resolved with the new update.
Glad to hear! :)