
TensorLearn

TensorLearn is a Python library, distributed on PyPI, that implements tensor learning methods.

This project is under development, but the available methods are functional. The only requirement is NumPy.

Installation

Use the package manager pip to install tensorlearn in Python.

pip install tensorlearn

Methods

Decomposition Methods

Tensor Operations for Tensor-Train

Tensor Operations for CANDECOMP/PARAFAC (CP)

Tensor Operations for Tucker

Tensor Operations

Matrix Operations


auto_rank_tt

tensorlearn.auto_rank_tt(tensor, epsilon)

This implementation of tensor-train decomposition determines the ranks automatically based on a given error bound, following Oseledets (2011). The user therefore does not need to specify the ranks; instead, the user specifies an upper error bound (epsilon) that bounds the error of the decomposition. For more information and details, please see the page tensor-train decomposition.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • epsilon < float >: the error bound of the decomposition in the range [0,1]

Return

  • TT factors < list of arrays >: the list contains the NumPy arrays of the factors (TT cores) of the TT decomposition. The length of the list equals the order (number of dimensions) of the decomposed tensor.

Example


cp_als_rand_init

tensorlearn.cp_als_rand_init(tensor, rank, iteration, random_seed=None)

This is an implementation of CANDECOMP/PARAFAC (CP) decomposition using the alternating least squares (ALS) algorithm with random initialization of the factors.

Arguments

  • tensor < array >: the given tensor to be decomposed

  • rank < int >: the rank of the CP decomposition (the number of rank-one components)

  • iteration < int >: the number of iterations of the ALS algorithm

  • random_seed < int >: the seed of random number generator for random initialization of the factor matrices

Return

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Example


tucker_hosvd

tensorlearn.tucker_hosvd(tensor, epsilon)

This implementation of Tucker decomposition determines the ranks automatically based on a given error bound using the HOSVD algorithm. The user therefore does not need to specify the ranks; instead, the user specifies an upper error bound (epsilon) that bounds the error of the decomposition in the Frobenius norm.

Arguments

  • tensor < array >: The given tensor to be decomposed.

  • epsilon < float >: The error bound of decomposition in the range [0,1].

Return

  • core factor < array >: The core factor of Tucker decomposition

  • factor matrices < list of arrays >: The factor matrices

tt_to_tensor

tensorlearn.tt_to_tensor(factors)

Returns the full tensor given the TT factors

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • full tensor < numpy array >

Example


tt_compression_ratio

tensorlearn.tt_compression_ratio(factors)

Returns the data compression ratio for tensor-train decomposition

Arguments

  • factors < list of numpy arrays >: TT factors

Return

  • Compression ratio < float >

Example


cp_to_tensor

Returns the full tensor given the CP factor matrices and weights

tensorlearn.cp_to_tensor(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • full tensor < array >

Example


cp_compression_ratio

Returns the data compression ratio for CP decomposition

tensorlearn.cp_compression_ratio(weights, factors)

Arguments

  • weights < array >: the vector of normalization weights (lambda) in CP decomposition

  • factors < list of arrays >: factor matrices of the CP decomposition

Return

  • Compression ratio < float >

Example


tucker_to_tensor

Returns the full tensor given the Tucker core factor and factor matrices

tensorlearn.tucker_to_tensor(core_factor, factor_matrices)

Arguments

  • core_factor < array >: the core factor of the Tucker format

  • factor_matrices < list of arrays >: factor matrices of the Tucker format

Return

  • full tensor < array >

tucker_compression_ratio

Returns the data compression ratio for Tucker decomposition.

tensorlearn.tucker_compression_ratio(core_factor, factor_matrices)

Arguments

  • core_factor < array >: the core factor of the Tucker format

  • factor_matrices < list of arrays >: factor matrices of the Tucker format

Return

  • Compression ratio < float >

tensor_resize

tensorlearn.tensor_resize(tensor, new_shape)

This method reshapes the given tensor to a new shape. The new size must be greater than or equal to the size of the original tensor. If the new shape results in a tensor with more elements, the extra entries are filled with zeros. This works similarly to numpy.ndarray.resize().

Arguments

  • tensor < array >: the given tensor

  • new_shape < tuple >: new shape

Return

  • tensor < array >: tensor with new given shape

unfold

tensorlearn.unfold(tensor, n)

Unfold the tensor with respect to dimension n.

Arguments

  • tensor < array >: tensor to be unfolded

  • n < int >: dimension based on which the tensor is unfolded

Return

  • matrix < array >: unfolded tensor with respect to dimension n

tensor_frobenius_norm

tensorlearn.tensor_frobenius_norm(tensor)

Calculates the Frobenius norm of the given tensor.

Arguments

  • tensor < array >: the given tensor

Return

  • frobenius norm < float >

Example


mode_n_product

tensorlearn.mode_n_product(tensor, matrix, n)

Returns the mode-n product of the given tensor and a matrix.

Arguments

  • tensor < array >: the given tensor

  • matrix < 2D array >: the given matrix

  • n < int >: the mode of the tensor along which the product is taken

Return

  • tensor < array >: tensor product

error_truncated_svd

tensorlearn.error_truncated_svd(x, error)

This method computes a compact SVD and returns the sigma (error)-truncated SVD of a given matrix. It is implemented using numpy.linalg.svd with full_matrices=False.

Arguments

  • x < 2D array >: the given matrix to be decomposed

  • error < float >: the given error (equal to the norm of the error matrix)

Return

  • r, u, s, vh < int, numpy array, numpy array, numpy array >: the truncation rank and the truncated SVD factors

column_wise_kronecker

tensorlearn.column_wise_kronecker(a, b)

Returns the column-wise Kronecker product (also known as the Khatri-Rao product) of the two given matrices.

Arguments

  • a,b < 2D array >: the given matrices

Return

  • column wise Kronecker product < array >
