A tensor network is a triple $(\Lambda, \{T^{(i)}_{\sigma_i}\}, \sigma_0)$, where
- $\Lambda$ is a set of indices,
- $\{T^{(i)}_{\sigma_i}\}$ is a set of tensors together with their associated indices, and
- $\sigma_0$ is the set of output indices.

The contraction of a tensor network is defined as summing the product of all tensors over the indices not in the output:
$$
{\rm con}(\Lambda, \{T^{(i)}_{\sigma_i}\}, \sigma_0) = \sum_{\Lambda \setminus \sigma_0} \prod_i T^{(i)}_{\sigma_i}.
$$
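For concreteness, this definition can be checked numerically. The notes use Julia's OMEinsum below; the following is an equivalent sketch in NumPy (shapes chosen arbitrarily), contracting the network ${\rm con}(\{i,j,k\}, \{A_{ij}, B_{jk}, C_{ki}\}, \{\})$:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 2))

# con({i,j,k}, {A_ij, B_jk, C_ki}, {}): sum over all indices not in the output.
result = np.einsum("ij,jk,ki->", A, B, C)

# The same contraction written as ordinary matrix algebra.
assert np.isclose(result, np.trace(A @ B @ C))
```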
We use a node to denote a tensor and a line to denote an index.
Math: $\sum_{i,j,k} A_{ij} B_{jk} C_{ki} = {\rm tr}(ABC)$

Julia: `ein"ij,jk,ki->"(A, B, C)`

Math: $(U\,{\rm diag}(S)\,V)_{ik} = \sum_{j} U_{ij} S_j V_{jk}$

Julia: `ein"ij,j,jk->ik"(U, S, V)`

- A good contraction order reduces the space complexity, time complexity and read-write complexity.
- The space complexity of a contraction order is related to the treewidth in graph theory.
- For algorithms to find the optimal contraction order, please check: https://github.com/TensorBFS/OMEinsumContractionOrders.jl
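Why the order matters can already be seen on a matrix chain. A minimal NumPy sketch (dimensions are arbitrary): contracting $A_{ij}B_{jk}v_k$ as $(AB)v$ costs $O(d^3)$ time, while $A(Bv)$ costs $O(d^2)$, yet both orders yield the same result:

```python
import numpy as np

d = 50
rng = np.random.default_rng(1)
A, B = rng.random((d, d)), rng.random((d, d))
v = rng.random(d)

# Two contraction orders for the same network A_ij B_jk v_k:
# (A B) v costs O(d^3) time and an O(d^2) intermediate;
# A (B v) costs O(d^2) time and an O(d)   intermediate.
slow = (A @ B) @ v
fast = A @ (B @ v)
assert np.allclose(slow, fast)
```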
Q1: What is the Julia code for the above diagram?
$$
\begin{align}
\begin{split}
&{\rm con}(\{i,j,k,l,m,n\},\\
&\quad\quad \{A_{\{i,l\}}, B_{\{l\}}, C_{\{k,j,l\}}, D_{\{k,m,n\}}, E_{\{j,n\}}\},\\
&\quad\quad \{i,m\})\\
&= \sum_{j,k,l,n} A_{\{i,l\}}\, B_{\{l\}}\, C_{\{k,j,l\}}\, D_{\{k,m,n\}}\, E_{\{j,n\}}.
\end{split}
\end{align}
$$
Q2: Given the contraction tree below, what is the corresponding time complexity, space complexity and read-write complexity?
Slicing technique can be used to reduce the space complexity of the contraction order.
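A minimal NumPy sketch of slicing (the network and shapes are illustrative): fixing the sliced index $j$ turns one large contraction into a sum of smaller ones, reducing the peak memory at the cost of looping:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 2))

# Full contraction: sum_{ijk} A_ij B_jk C_ki.
full = np.einsum("ij,jk,ki->", A, B, C)

# Slicing over index j: fix j, contract the smaller remaining network,
# then sum over the slices. Each slice's intermediates omit the j axis.
sliced = sum(np.einsum("i,k,ki->", A[:, j], B[j, :], C) for j in range(3))
assert np.isclose(full, sliced)
```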
A matrix product state is a tensor network representation of a quantum state.
Q: What is the rank of the above MPS?
Q: Let the virtual bond dimension be
- Example: product state
- Example: GHZ state
- Example: AKLT state (Ref. 2 P31)
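The GHZ example above can be written as a bond-dimension-2 MPS. A NumPy sketch (the all-ones boundary vectors are one possible convention, not the only one):

```python
import numpy as np

n = 4
# Site tensor G[s] for GHZ: one 2x2 matrix per physical index s,
# G[s] = |s><s|, so the matrix product vanishes unless all physical
# indices agree.
G = np.zeros((2, 2, 2))  # (physical, left bond, right bond)
G[0, 0, 0] = 1.0
G[1, 1, 1] = 1.0
l = np.ones(2)  # boundary vectors (one convention among several)
r = np.ones(2)

# Contract the MPS into the full 2^n state tensor.
psi = np.zeros((2,) * n)
for idx in np.ndindex(*psi.shape):
    M = l
    for s in idx:
        M = M @ G[s]
    psi[idx] = M @ r
psi /= np.linalg.norm(psi)

# Only |00...0> and |11...1> survive, each with amplitude 1/sqrt(2).
assert np.isclose(psi[(0,) * n], 1 / np.sqrt(2))
assert np.isclose(psi[(1,) * n], 1 / np.sqrt(2))
```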
- Every multipartite quantum state has a Schmidt decomposition
- Schmidt decomposition can be related to singular value decomposition (SVD).
- The entanglement entropy is defined as $S = -{\rm Tr}(\rho_A \ln \rho_A) = -\sum_i \lambda_i^2 \ln \lambda_i^2$, where $\lambda_i$ are the Schmidt coefficients.
- Reduced density matrix - the tensor network representation
- The eigenvalues of the reduced density matrix are the squares of the Schmidt coefficients.
- Schmidt decomposition
- Systems with area law
- Compression: How does the truncation error relate to the expectation value?
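The Schmidt-decomposition bullets above can be verified numerically: reshape a bipartite state into a matrix, take its SVD, and compare the squared singular values with the spectrum of the reduced density matrix. A NumPy sketch with arbitrary dimensions:

```python
import numpy as np

# Schmidt decomposition via SVD: view |psi> on A (x) B as a matrix.
rng = np.random.default_rng(3)
psi = rng.random((4, 8)) + 1j * rng.random((4, 8))  # dim(A)=4, dim(B)=8
psi /= np.linalg.norm(psi)

U, s, Vh = np.linalg.svd(psi, full_matrices=False)
# Schmidt coefficients are the singular values; their squares are the
# eigenvalues of the reduced density matrix rho_A = psi psi^dagger.
rho_A = psi @ psi.conj().T
evals = np.sort(np.linalg.eigvalsh(rho_A))[::-1]
assert np.allclose(evals, s**2)

# Entanglement entropy S = -sum_i s_i^2 log s_i^2 (non-negative).
S = -np.sum(s**2 * np.log(s**2))
assert S >= 0
```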
- Baker–Campbell–Hausdorff (BCH) formula and Trotter decomposition

The dual of the BCH formula is the Zassenhaus formula:
$$
e^{t(X+Y)} = e^{tX}\, e^{tY}\, e^{-\frac{t^{2}}{2}[X,Y]}\, e^{\frac{t^{3}}{6}(2[Y,[X,Y]]+[X,[X,Y]])}\, e^{\frac{-t^{4}}{24}([[[X,Y],X],X]+3[[[X,Y],X],Y]+3[[[X,Y],Y],Y])} \cdots
$$
When $dt$ is small, the first-order Trotter decomposition is accurate:
$$
e^{dt(X+Y)} \approx e^{dt\,X}\, e^{dt\,Y}
$$
- Time-evolving block decimation (TEBD): Consider the time evolution of a local Hamiltonian $H = \sum_i h_{i,i+1}$,
$$
|\psi(t)\rangle = e^{-iHt}\,|\psi(0)\rangle,
$$
where each term $h_{i,i+1}$ acts only on a pair of neighboring sites.
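The first-order Trotter claim can be sanity-checked numerically: by the BCH/Zassenhaus expansion the local error is $O(dt^2)$, so halving $dt$ should shrink it by roughly a factor of four. A NumPy sketch with random symmetric matrices (the `sym_expm` helper is ad hoc and valid only for symmetric inputs):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((4, 4)); X = X + X.T  # random symmetric, non-commuting pair
Y = rng.random((4, 4)); Y = Y + Y.T

def sym_expm(M):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

def trotter_error(dt):
    # Difference between the exact propagator and one first-order step.
    return np.linalg.norm(sym_expm(dt * (X + Y))
                          - sym_expm(dt * X) @ sym_expm(dt * Y))

e1, e2 = trotter_error(0.01), trotter_error(0.005)
# Local error is O(dt^2): halving dt shrinks it by roughly a factor of 4.
assert e2 < e1 / 3
```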
- Norbert Schuch: Matrix product states and tensor networks (I)
- Norbert Schuch: Matrix product states and tensor networks (II)
- Norbert Schuch - Lower bounding ground state energies through renormalization and tensor networks
- Tutorial on Tensor Networks and Quantum Computing with Miles Stoudenmire
Differentiating a tensor in a tensor network contraction is equivalent to removing the tensor.
Adjoint: $\overline{T} \equiv \frac{\partial \mathcal{L}}{\partial T}$, where $\mathcal{L}$ is a scalar loss at the end of the computation.

Differential form: $\delta \mathcal{L} = \sum_{\sigma} \overline{T}_{\sigma}\, \delta T_{\sigma}$.

Backward rule of einsum: Consider
$$
y = {\rm con}(\Lambda, \{T^{(1)}_{\sigma_1}, \ldots, T^{(n)}_{\sigma_n}\}, \sigma_y),
$$
where $\sigma_y$ is the set of output indices. The differential form is:
$$
\delta \mathcal{L} = \sum_{\sigma_y} \overline{y}_{\sigma_y}\, \delta y_{\sigma_y}.
$$
We have
$$
\overline{T^{(i)}} = {\rm con}(\Lambda, \{T^{(1)}_{\sigma_1}, \ldots, T^{(i-1)}_{\sigma_{i-1}}, \overline{y}_{\sigma_y}, T^{(i+1)}_{\sigma_{i+1}}, \ldots, T^{(n)}_{\sigma_n}\}, \sigma_i),
$$
i.e., the adjoint of the $i$-th tensor is obtained by removing it from the network, inserting $\overline{y}$ in its place, and taking $\sigma_i$ as the output indices.
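The backward rule can be checked against finite differences. A NumPy sketch for `y = einsum("ij,jk->ik", A, B)`: the adjoint of `A` is the same contraction with $\overline{y}$ substituted in `A`'s place and `A`'s indices as output:

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.random((3, 4)), rng.random((4, 5))
ybar = rng.random((3, 5))  # adjoint of the output, d(loss)/dy

loss = lambda A: np.sum(ybar * np.einsum("ij,jk->ik", A, B))

# Backward rule: replace A by ybar in the contraction, output A's indices.
Abar = np.einsum("ik,jk->ij", ybar, B)

# Finite-difference check of one entry.
eps = 1e-6
dA = np.zeros_like(A); dA[1, 2] = eps
numeric = (loss(A + dA) - loss(A - dA)) / (2 * eps)
assert np.isclose(Abar[1, 2], numeric, atol=1e-5)
```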
Q: How about complex numbers?
- Rule-based AD: derive the rules using the Wirtinger calculus
- Source-to-source AD: same as for real numbers
- Tensor network based simulation
- Special gates
- Expectation value
- ZX-calculus
- Optimal contraction order and treewidth
- Entanglement propagation
- Lieb–Robinson bound; check the entanglement entropy
- Tensor renormalization group (TRG)
- Probabilistic graphical models
- Combinatorial optimization
- Example: Spin-glass
- Example: Maximum independent set
- Example: Circuit SAT
- Is it possible to reduce spin-glass to circuit SAT?
- Generic tensor networks
- Overlap gap property
- Discuss hardest instance complexity and average complexity
- From factoring to independent set problem
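As a minimal illustration of the spin-glass example above: the partition function of a tiny Ising chain is a tensor network whose edge tensors are Boltzmann matrices, and its contraction matches brute-force enumeration. A NumPy sketch (couplings and temperature are arbitrary):

```python
import numpy as np

# Partition function of a 3-site Ising chain as a tensor network:
# Z = sum_{s} exp(beta * (J s1 s2 + J s2 s3)). Each edge carries a 2x2
# Boltzmann matrix; summing over the spin indices is an einsum.
beta, J = 0.7, 1.0
M = np.array([[np.exp(beta * J), np.exp(-beta * J)],
              [np.exp(-beta * J), np.exp(beta * J)]])

Z_tn = np.einsum("ij,jk->", M, M)

# Brute force over all 2^3 spin configurations.
spins = [-1, 1]
Z_bf = sum(np.exp(beta * J * (s1 * s2 + s2 * s3))
           for s1 in spins for s2 in spins for s3 in spins)
assert np.isclose(Z_tn, Z_bf)
```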
Footnotes

1. Roa-Villescas, M., Gao, X., Stuijk, S., Corporaal, H., Liu, J.-G., 2024. Probabilistic Inference in the Era of Tensor Networks and Differential Programming. https://doi.org/10.48550/arXiv.2405.14060
2. Schollwöck, U., 2011. The density-matrix renormalization group in the age of matrix product states. Annals of Physics 326, 96–192. https://doi.org/10.1016/j.aop.2010.09.012

