Comments (8)
How do you want to deal with this exactly? What would the API changes look like?
from tensornetwork.
It might be worth considering whether we want ultimately to also support symmetric tensors, with guaranteed invariance under the action of some symmetry group on their indices. Such tensors have a "block-diagonal" structure: They have many zero elements by construction, so they can be handled more efficiently (in many cases) by storing only the non-zero blocks. Each block is an "inner" tensor with the same rank as the "outer" tensor, but with different axis dimensions.
Such tensors are not only stored differently; they also want to be treated differently when contracted with other tensors. If the other tensor is also symmetric, the contraction can be done block-wise (sparsely) for more efficiency.
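To make the block-wise idea concrete, here is a minimal sketch (the names are illustrative, not the tensornetwork API): a block-diagonal symmetric matrix is stored as a dict mapping each symmetry sector to its dense block, and contraction visits only the non-zero blocks.

```python
import numpy as np

def block_matmul(a_blocks, b_blocks):
    """Multiply two block-diagonal matrices sector by sector,
    touching only the sectors present in both operands."""
    return {sector: a_blocks[sector] @ b_blocks[sector]
            for sector in a_blocks if sector in b_blocks}

# Two matrices with sectors 0 (a 2x2 block) and 1 (a 3x3 block).
a = {0: np.eye(2), 1: 2.0 * np.eye(3)}
b = {0: np.ones((2, 2)), 1: np.ones((3, 3))}
c = block_matmul(a, b)
# The zero off-diagonal blocks of the equivalent dense 5x5 matrices
# are never stored and never multiplied.
```

The same sector-by-sector pattern generalizes to higher-rank tensors, where each block is an "inner" tensor of the same rank as the outer one.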
Are you also imagining such cases @viathor?
I think block-sparsity would be interesting to have, but I am not sure it should live at the level of the API. It should probably be implemented directly in the tensor itself.
@mganahl Wouldn't that mean implementing sparse tensors in the backends? I'm not sure it's possible to extend TensorFlow's tensors that way - how would you tell the ops to handle sparse tensors differently? I think you would have to modify ops like tensordot() to make it work.
Actually, I think this is totally doable without a large code change. Basically, the idea is to make `node.tensor` an `@property` method. Then, for special-case tensors like `CopyTensor`, you could just generate the tensor on the fly instead of storing the dense representation. We could add a default way to do this using just preexisting backend components, and/or we could build specialized ways of doing it just for JAX/TF. There would be no need to modify `net.contract`, and we could add special contraction methods like `net.contract_copy_tensor` that would utilize the sparsity of the tensor for more efficient contraction. Best of both worlds!
Also, that means we can finally deprecate `get_tensor()` and bring back the "only one way" paradigm of accessing the underlying tensor.
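A rough sketch of the lazy-materialization idea (the class below is a stand-in for illustration, not the actual tensornetwork implementation): `tensor` is a read-only `@property`, so the dense representation exists only while someone asks for it.

```python
import numpy as np

class LazyCopyNode:
    """Illustrative stand-in: a copy-tensor node that stores only its
    rank and dimension, and materializes the dense tensor on access."""

    def __init__(self, rank, dimension):
        self.rank = rank
        self.dimension = dimension

    @property
    def tensor(self):
        # Dense copy tensor: 1 where all indices coincide, 0 elsewhere.
        t = np.zeros((self.dimension,) * self.rank)
        for i in range(self.dimension):
            t[(i,) * self.rank] = 1.0
        return t

node = LazyCopyNode(rank=3, dimension=2)
dense = node.tensor  # generated on the fly, never stored on the node
```

Specialized contraction methods could then bypass `tensor` entirely and exploit the sparsity directly.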
I moved the symmetric tensor discussion into issue #86.
This makes a lot of sense to me. Thanks for filing #86.
One way to make contractions efficient is to reduce the number of independent indices that must be ranged over. This can be done by exploiting the fact that some tensors' inner shapes differ from their outer shapes, which lets us decompose a sparse tensor into a lower-rank tensor and one or more copy tensors. The latter can be contracted very efficiently: a rank-n copy tensor can be contracted with all of its neighbors by ranging over a single index instead of n indices.
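The single-index trick can be checked numerically. Contracting a rank-3 copy tensor with three vectors naively ranges over three indices, but since the copy tensor is non-zero only on its diagonal, one index suffices: an elementwise product followed by a sum.

```python
import numpy as np

dim = 4
rng = np.random.default_rng(0)
vecs = [rng.random(dim) for _ in range(3)]

# Naive: build the dense rank-3 copy tensor and contract all indices.
copy = np.zeros((dim, dim, dim))
for i in range(dim):
    copy[i, i, i] = 1.0
naive = np.einsum('abc,a,b,c->', copy, *vecs)

# Fast: range over the single diagonal index only.
fast = np.sum(vecs[0] * vecs[1] * vecs[2])
```

The two results agree, but the fast path does O(dim) work instead of O(dim^3).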
For example, the quantum CZ gate (a rank-4 tensor) can be decomposed into a Z gate (a rank-2 tensor) and two rank-3 copy tensors.
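A sketch verifying a decomposition of this kind. Note the convention here is an assumption: with the copy tensors below, the rank-2 core that reproduces CZ works out to the diagonal-phase pattern M[a, b] = (-1)**(a*b); other index conventions relate this core to the Z gate.

```python
import numpy as np

# Rank-3 copy tensor on each qubit line.
copy = np.zeros((2, 2, 2))
for i in range(2):
    copy[i, i, i] = 1.0

# Rank-2 core: M[a, b] = (-1)**(a * b).
M = np.array([[1.0, 1.0], [1.0, -1.0]])

# cz[i, j, i2, j2] = sum_{a,b} copy[i, i2, a] * M[a, b] * copy[j, j2, b]
cz = np.einsum('xia,ab,yjb->xyij', copy, M, copy)

# Compare against the dense 4x4 CZ gate, reshaped to a rank-4 tensor.
cz_dense = np.diag([1.0, 1.0, 1.0, -1.0]).reshape(2, 2, 2, 2)
```

Contracting through the two copy tensors only ever ranges over the two diagonal indices a and b, rather than all four outer indices at once.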
Note that in this approach, the backend we use for contractions doesn't need to know about the symmetries we'd like to exploit (though some basic features we need, like einsum, cannot be taken for granted; see #87).
I believe this task is now fully supported since both `node.tensor` and `node.shape` are properties, thus achieving the "inner and outer shapes". Closing this issue.