
Tensor Network Learning with PyTorch


**Read the Docs site: http://tntorch.readthedocs.io/**

Welcome to *tntorch*, a PyTorch-powered modeling and learning library using tensor networks. Such networks are unique in that they use *multilinear* neural units (instead of non-linear activation units). Features include:

- Basic and fancy **indexing** of tensors, **broadcasting**, **assignment**, etc. (see the sketch after this list)
- Tensor **decomposition** and **reconstruction**
- Element-wise and tensor-tensor **arithmetics**
- Building tensors from black-box functions using **cross-approximation**
- Finding global **maxima** and **minima** of tensors
- **Statistics** and **sensitivity analysis**
- **Optimization** using autodifferentiation
- **Misc. operations** on tensors: stacking, unfolding, sampling, differentiating, etc.
- **Batch operations** (work in progress)
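
For instance, indexing and arithmetic act directly on the compressed representation. A minimal sketch (the shapes and ranks here are illustrative; `tn.randn` is introduced later in this README):

```python
import tntorch as tn

a = tn.randn(16, 16, 16, ranks_tt=3)  # random 3D TT tensor
b = tn.randn(16, 16, 16, ranks_tt=3)

c = a + b        # element-wise sum, still a compressed tensor
d = a * b        # element-wise product
s = a[:, 4, :]   # slicing yields another compressed tensor
print(c, d, s)
```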

Available tensor formats include:

- CANDECOMP/PARAFAC (CP)
- Tucker
- Tensor train (TT)
- Hybrids: CP-Tucker, TT-Tucker, etc.
- Partial support for other decompositions such as INDSCAL, CANDELINC, DEDICOM, PARATUCK2, and custom formats

For example, a 4D tensor (i.e. a real function that can be evaluated at I1 x I2 x I3 x I4 possible points) can be represented as a network of four interconnected cores in the TT format, or with an extra factor matrix attached to each core in the TT-Tucker format.

In *tntorch*, **all tensor decompositions share the same interface**. You can handle them transparently, as if they were plain NumPy arrays or PyTorch tensors:

```python
import tntorch as tn

t = tn.randn(32, 32, 32, 32, ranks_tt=5)  # Random 4D TT tensor of shape 32 x 32 x 32 x 32 and TT-rank 5
print(t)
```

```
4D TT tensor:

 32  32  32  32
  |   |   |   |
 (0) (1) (2) (3)
 / \ / \ / \ / \
1   5   5   5   1
```

```python
print(tn.mean(t))
```

```
tensor(8.0388)
```

```python
print(tn.norm(t))
```

```
tensor(9632.3726)
```
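
The same calls work for the other formats. A minimal sketch, assuming `tn.randn` also accepts `ranks_cp` and `ranks_tucker` arguments analogous to `ranks_tt` above:

```python
import tntorch as tn

cp = tn.randn(32, 32, 32, 32, ranks_cp=5)          # random 4D CP tensor
tucker = tn.randn(32, 32, 32, 32, ranks_tucker=5)  # random 4D Tucker tensor

# The interface is format-agnostic:
print(tn.mean(cp))
print(tn.norm(tucker))
```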

Decompressing tensors is easy:

```python
print(t.torch().shape)
```

```
torch.Size([32, 32, 32, 32])
```

Thanks to PyTorch's automatic differentiation, you can easily define all sorts of loss functions on tensors:

```python
import torch

def loss(t):
    return torch.norm(t[:, 0, 10:, [3, 4]].torch())  # NumPy-like "fancy indexing" for arrays
```

Most importantly, loss functions can be defined on **compressed** tensors as well:

```python
def loss(t):
    return tn.norm(t[:3, :3, :3, :3] - t[-3:, -3:, -3:, -3:])
```
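
Such losses can be minimized directly in the compressed domain. A minimal sketch, assuming tntorch's `tn.optimize()` helper, which runs PyTorch gradient descent over the tensor's cores (check the documentation for its exact signature):

```python
import tntorch as tn

# Make the tensor's cores trainable
t = tn.randn(32, 32, 32, 32, ranks_tt=5, requires_grad=True)

def loss(t):
    return tn.norm(t[:3, :3, :3, :3] - t[-3:, -3:, -3:, -3:])

tn.optimize(t, loss)  # iteratively updates the cores to reduce the loss
```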

Check out the introductory notebook for all the details on the basics. Available tutorial notebooks:

- Introduction
- Active subspaces
- ANOVA decomposition
- Boolean logic
- Classification
- Cross-approximation
- Differentiation
- Discrete/weighted finite automata
- Exponential machines
- Main tensor formats available
- Other custom formats
- Polynomial chaos expansions
- Tensor arithmetics
- Tensor completion and regression
- Tensor decomposition
- Sensitivity analysis
- Vector field data

You can install *tntorch* using *pip*:

```
pip install tntorch
```

Alternatively, you can install from source:

```
git clone https://github.com/rballester/tntorch.git
cd tntorch
pip install .
```

For functions that use cross-approximation, the optional package *maxvolpy* is required. It can be installed via:

```
pip install maxvolpy
```
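
As a sketch of how cross-approximation is used, a tensor can be built by adaptively sampling a black-box function over a grid. The `function`/`domain` pattern below follows tntorch's documentation, but treat it as an assumption and check the docs for the exact signature of `tn.cross()`:

```python
import torch
import tntorch as tn

def f(x, y, z):
    # Black-box function, evaluated vectorized over batches of sample points
    return torch.sqrt(x**2 + y**2 + z**2)

domain = [torch.linspace(0, 1, 32)] * 3  # a 32 x 32 x 32 grid over [0, 1]^3
t = tn.cross(function=f, domain=domain)  # adaptively samples f to build a TT tensor
```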

For testing we use *pytest*. Simply run:

```
cd tests/
pytest
```

Pull requests are welcome!

Besides using the issue tracker, also feel free to contact me at [email protected].