
block-term-tensor-regression's Introduction

Block-Term Tensor Regression (BTTR)

BTTR is a deflation-based method in which maximally correlated representations of X and Y are extracted via ACE/ACCoS (Automatic Component Extraction / Automatic Correlated Component Selection) at each iteration. BTTR therefore inherits the advantages of ACE/ACCoS and does not require the model parameters to be set manually. This gives BTTR an additional important property: the ability to model complex data in which the optimal Multilinear Rank (MTR) is not necessarily stable across sequential decompositions.

[1] Faes, Axel, Flavio Camarrone, and Marc M. Van Hulle. "Single finger trajectory prediction from intracranial brain activity using block-term tensor regression with fast and automatic component extraction." IEEE Transactions on Neural Networks and Learning Systems (2022).
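To make the "extract, regress, deflate, repeat" structure concrete, below is a minimal plain-matrix analogue in the spirit of PLS1. It is only an illustration of deflation, not BTTR itself: BTTR operates on tensors and selects the multilinear rank automatically via ACE/ACCoS, which this sketch does not attempt.

import numpy as np

def pls1_deflation(X, y, n_components):
    # Illustrative matrix deflation loop (PLS1-style), not the repository's algorithm.
    X = X.astype(float).copy()
    y = y.astype(float).copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)        # weight vector: direction of X most correlated with y
        t = X @ w                     # score vector (the extracted representation)
        p = X.T @ t / (t @ t)         # X loading
        q = (y @ t) / (t @ t)         # y loading (regression coefficient on the score)
        X -= np.outer(t, p)           # deflate X: remove the part explained by t
        y -= q * t                    # deflate y before extracting the next component
        W.append(w); P.append(p); Q.append(q)
    return np.array(W).T, np.array(P).T, np.array(Q)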

block-term-tensor-regression's People

Contributors

theaxec, evacalvo983


block-term-tensor-regression's Issues

ACE brute-force search

Currently, ACE uses an exhaustive, brute-force search over the (snr, ratio) grid:

optimal_bic = None  # best BIC seen so far (initialisation assumed; not shown in the original excerpt)
for snr in SNRs:
    for ratio in ratios:
        # decompose with this (snr, ratio) candidate and score it with BIC
        tmp_core_tensor_G, tmp_components_P = modified_pstd(full_tensor_C, core_tensor_G, components_P, snr, ratio)
        bic = calculateBIC(full_tensor_C, tmp_core_tensor_G, tmp_components_P)
        if not optimal_bic or bic < optimal_bic:
            optimal_bic = bic
            optimal = (snr, ratio)
            core_tensor_G_out, components_P_out = tmp_core_tensor_G, tmp_components_P

Since ACE is a rather computationally intensive component of BTTR, we could perhaps optimise it by performing a smarter search:

  • Looking for a local optimum instead of scoring every (snr, ratio) pair (a sketch follows below)
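As an illustration only, a simple alternating (coordinate-descent style) search over the same SNRs and ratios grids could replace the full double loop. modified_pstd and calculateBIC are the repository functions shown above; everything else here is a hypothetical sketch, not the repository's implementation.

def local_search(full_tensor_C, core_tensor_G, components_P, SNRs, ratios, max_rounds=5):
    # Start from the middle of each grid and alternately optimise one parameter
    # while holding the other fixed, instead of scoring every (snr, ratio) pair.
    i, j = len(SNRs) // 2, len(ratios) // 2

    def score(ii, jj):
        G, P = modified_pstd(full_tensor_C, core_tensor_G, components_P, SNRs[ii], ratios[jj])
        return calculateBIC(full_tensor_C, G, P), G, P

    best_bic, best_G, best_P = score(i, j)
    for _ in range(max_rounds):
        improved = False
        # sweep over SNR with ratio fixed, then over ratio with SNR fixed
        for ii in range(len(SNRs)):
            bic, G, P = score(ii, j)
            if bic < best_bic:
                best_bic, best_G, best_P, i, improved = bic, G, P, ii, True
        for jj in range(len(ratios)):
            bic, G, P = score(i, jj)
            if bic < best_bic:
                best_bic, best_G, best_P, j, improved = bic, G, P, jj, True
        if not improved:
            break  # no parameter change improved the BIC: local optimum reached
    return best_G, best_P, (SNRs[i], ratios[j])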

ACE Pruning can prune everything

The ACE pruning routine:

def pruning(core_tensor_G, components_P, ratio):
    r"""
    At each iteration, the threshold :math:`\tau \in [0, 100]` is used to reject unnecessary components from the n-mode:
    :math:`S^{(n)} = \{ r \mid 100\,(1 - \frac{\sum_i \mathbf{G}_{(n)}(r, i)}{\sum_{t,i} \mathbf{G}_{(n)}(t, i)}) \ge \tau \}`,
    :math:`\mathbf{P}^{(n)} = \mathbf{P}^{(n)}(:, S^{(n)})` and :math:`\mathbf{G}^{(n)} = \mathbf{G}^{(n)}(S^{(n)}, :)`.

    See the following reference for a full overview of the algorithm:
    Yokota, Tatsuya, and Andrzej Cichocki. "Multilinear tensor rank estimation via sparse Tucker decomposition." In 2014 Joint 7th International Conference on Soft Computing and Intelligent Systems (SCIS) and 15th International Symposium on Advanced Intelligent Systems (ISIS), pp. 478-483. IEEE, 2014.

    Args:
        core_tensor_G (np.ndarray): Core tensor obtained from the Tucker decomposition of full_tensor_C
        components_P (list of np.ndarray): Factor matrices obtained from the Tucker decomposition of full_tensor_C
        ratio (float/int): Threshold parameter used for pruning the components
    """
    N = len(components_P)
    RR = [components_P[n].shape[1] for n in range(0, N)]
    components_P_out = [None] * N
    for n in range(0, N):
        Gm = tl.unfold(core_tensor_G, n)
        gm = tl.sum(tl.abs(Gm), 1)
        # ids: components rejected because their relative contribution is too small; inv_ids: components kept
        ids = [k for k in range(0, Gm.shape[0]) if ((1 - gm[k] / tl.sum(gm)) * 100) > ratio]
        inv_ids = [k for k in range(0, Gm.shape[0]) if k not in ids]
        RR[n] = len(inv_ids)
        Gm = Gm[inv_ids, :]
        components_P_out[n] = components_P[n][:, inv_ids]
        core_tensor_G = tl.fold(Gm, n, RR)
    return core_tensor_G, components_P_out

can be very aggressive, depending on the value of the "ratio" parameter.
Because of this, the code can crash:

ace.py:109: RuntimeWarning: invalid value encountered in double_scalars
ids = [k for k in range(0, Gm.shape[0]) if ((1 - gm[k] / tl.sum(gm)) * 100) > ratio]
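As an assumption based on the code above (not stated in the issue), one way this warning can arise is when an earlier mode has already been pruned to zero components, so that gm is all zeros on the next mode and the division becomes 0/0:

import numpy as np

gm = np.zeros(3)                          # every remaining contribution is zero after over-pruning
value = (1 - gm[0] / np.sum(gm)) * 100    # 0.0 / 0.0 -> nan, with a RuntimeWarning
print(value)                              # nan; nan > ratio is always False, so the selection logic misbehaves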

Two possible solutions (a sketch of such a guard follows below):

  • An exception needs to be raised to indicate to modified_pstd that the solution has been reached.
  • An exception needs to be raised to indicate to automatic_component_extraction that no viable solution exists.
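As an illustration only, a minimal guard inside the pruning step could detect the empty-core case and raise. The exception class AllComponentsPrunedError and its exact placement are hypothetical, not part of the repository:

import tensorly as tl

class AllComponentsPrunedError(Exception):
    """Raised when pruning would remove every component of a mode (hypothetical)."""

def pruning_with_guard(core_tensor_G, components_P, ratio):
    N = len(components_P)
    RR = [components_P[n].shape[1] for n in range(N)]
    components_P_out = [None] * N
    for n in range(N):
        Gm = tl.unfold(core_tensor_G, n)
        gm = tl.sum(tl.abs(Gm), 1)
        total = tl.sum(gm)
        if total == 0:
            # every component of an earlier mode was already pruned; the ratio is too aggressive
            raise AllComponentsPrunedError(f"mode {n}: core tensor is empty")
        # keep the components whose pruning criterion does not exceed the threshold
        inv_ids = [k for k in range(Gm.shape[0]) if ((1 - gm[k] / total) * 100) <= ratio]
        if not inv_ids:
            # this mode itself would lose all of its components
            raise AllComponentsPrunedError(f"mode {n}: ratio={ratio} prunes every component")
        RR[n] = len(inv_ids)
        components_P_out[n] = components_P[n][:, inv_ids]
        core_tensor_G = tl.fold(Gm[inv_ids, :], n, RR)
    return core_tensor_G, components_P_out

Callers such as modified_pstd or automatic_component_extraction could then catch AllComponentsPrunedError and either stop the search or skip the offending candidate.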

Slow (and possibly infinite) SVD

score_vector_t = fix_numpy_vector(
    scipy.linalg.svd(
        tl.unfold(
            tl.tucker_to_tensor(
                (self.X.tensor, [None] + self.components[:idx_p]),
                skip_factor=0,
                transpose_factors=True,
            ),
            0,
        )
    )[0][:, 0]  # compute SVD, select the left [0] matrix, take the first singular vector
)

The creation of the score vector is rather slow compared to the score_vector_matrix implementation.

In addition, in some circumstances the SVD may fail to converge at all.
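One possible mitigation (an illustration, not the repository's implementation) is to compute only the leading left singular vector with scipy.sparse.linalg.svds, which avoids the full decomposition and exposes explicit maxiter/tol controls. The function name and the unfolded argument below are placeholders:

import numpy as np
import scipy.sparse.linalg

def leading_left_singular_vector(unfolded, seed=0):
    # Compute only the first left singular vector instead of the full SVD.
    # Assumes min(unfolded.shape) >= 2, as required by svds with k=1.
    rng = np.random.default_rng(seed)
    v0 = rng.standard_normal(min(unfolded.shape))   # deterministic starting vector
    u, s, vt = scipy.sparse.linalg.svds(unfolded, k=1, v0=v0, maxiter=5000, tol=1e-8)
    return u[:, 0]

With the default ARPACK solver, failure to converge within maxiter surfaces as an ArpackNoConvergence exception rather than an apparently endless computation, so the caller can detect and handle the failure.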
