@@ -17,12 +17,12 @@ The most important objects in TensorKit.jl are tensors, which we now create with
 A = randn(ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)
 ```
 
-Note that we entered the tensor size not as plain dimensions, by specifying the vector space
-associated with these tensor indices, in this case `ℝ^n`, which can be obtained by typing
-`\bbR+TAB`. The tensor then lives in the tensor product of the different spaces, which we
-can obtain by typing `⊗` (i.e. `\otimes+TAB`), although for simplicity also the usual
-multiplication sign `*` does the job. Note also that `A` is printed as an instance of a
-parametric type `TensorMap`, which we will discuss below and contains `Tensor`.
+Note that we entered the tensor size not as plain dimensions, but by specifying the vector
+space associated with these tensor indices, in this case `ℝ^n`, which can be obtained by
+typing `\bbR+TAB`. The tensor then lives in the tensor product of the different spaces,
+which we can obtain by typing `⊗` (i.e. `\otimes+TAB`), although for simplicity also the
+usual multiplication sign `*` does the job. Note also that `A` is printed as an instance of
+a parametric type `TensorMap`, which we will discuss below and which contains `Tensor`.
 
 Let us briefly sidetrack into the nature of `ℝ^n`:
 
@@ -35,8 +35,8 @@ supertype(ElementarySpace)
 ```
 
 i.e. `ℝ^n` can also be created without Unicode using the longer syntax `CartesianSpace(n)`.
-It is subtype of `ElementarySpace`, with a standard (Euclidean) inner product over the real
-numbers. Furthermore,
+It is a subtype of `ElementarySpace`, with a standard (Euclidean) inner product over the
+real numbers. Furthermore,
 
 ```@repl tutorial
 W = ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4
@@ -57,7 +57,7 @@ B = randn(ℝ^3 * ℝ^2 * ℝ^4);
 C = 0.5*A + 2.5*B
 ```
 
-Given that they are behave as vectors, they also have a scalar product and norm, which they
+Given that they behave as vectors, they also have a scalar product and norm, which they
 inherit from the Euclidean inner product on the individual `ℝ^n` spaces:
 
 ```@repl tutorial
@@ -106,9 +106,11 @@ Finally, we can factorize a tensor, creating a bipartition of a subset of its in
 its complement. With a plain Julia `Array`, one would apply `permutedims` and `reshape` to
 cast the array into a matrix before applying e.g. the singular value decomposition. With
 TensorKit.jl, one just specifies which indices go to the left (rows) and right (columns)
+with a tuple of tuples, selecting the respective indices for either side.
 
 ```@repl tutorial
-U, S, Vd = tsvd(A, ((1,3), (2,)));
+A_matrix = permute(A, ((1, 3), (2,)));
+U, S, Vd = svd_compact(A_matrix);
 @tensor A′[a,b,c] := U[a,c,d] * S[d,e] * Vd[e,b];
 A ≈ A′
 U
@@ -155,11 +157,12 @@ space(M₃)
 
 Note that for the construction of `M₁`, in accordance with how one specifies the dimensions
 of a matrix (e.g. `randn(4,3)`), the first space is the codomain and the second the domain.
-This is somewhat opposite to the general notation for a function `f:domain→codomain`, so
-that we also support this more mathemical notation, as illustrated in the construction of
-`M₂`. However, as this is confusing from the perspective of rows and columns, we also
-support the syntax `codomain ← domain` and actually use this as the default way of printing
-`HomSpace` instances.
+This is somewhat opposite to the general notation for a function
+``f : \text{domain} \rightarrow \text{codomain}``, so that we also support this more
+mathematical notation, as illustrated in the construction of `M₂`. However, as this is
+confusing from the perspective of rows and columns, we also support the syntax
+`codomain ← domain` and actually use this as the default way of printing `HomSpace`
+instances.
 
 The 'matrix-vector' or 'matrix-matrix' product can be computed between any two `TensorMap`
 instances for which the domain of the first matches with the codomain of the second, e.g.
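As a minimal sketch of this composition rule (assuming TensorKit.jl is loaded; the spaces and names here are chosen purely for illustration):

```julia
using TensorKit

M = randn(ℝ^4, ℝ^3)   # codomain ℝ^4 ← domain ℝ^3
N = randn(ℝ^3, ℝ^2)   # codomain ℝ^3 ← domain ℝ^2
MN = M * N            # allowed: domain of M matches codomain of N
space(MN)             # resulting HomSpace, printed as ℝ^4 ← ℝ^2
```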
@@ -179,7 +182,7 @@ codomain(U)
 domain(U)
 space(U)
 U' * U # should be the identity on the corresponding domain = codomain
-U' * U ≈ one(U'* U)
+U' * U ≈ one(U' * U)
 P = U * U' # should be a projector
 P * P ≈ P
 ```
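The isometry and projector checks above can also be reproduced end-to-end; a small self-contained sketch (assuming TensorKit.jl, and reusing the tutorial's real-valued tensor from earlier):

```julia
using TensorKit

A = randn(ℝ^3 ⊗ ℝ^2 ⊗ ℝ^4)
U, S, Vd = svd_compact(permute(A, ((1, 3), (2,))))
U' * U ≈ one(U' * U)   # isometry: U'U is the identity on its domain
P = U * U'
P * P ≈ P              # UU' is a projector onto the image of U
```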
@@ -197,9 +200,9 @@ codomain(A2)
 domain(A2)
 ```
 
-In fact, `tsvd(A, ((1, 3), (2,)))` is a shorthand for `tsvd(permute(A, ((1, 3), (2,))))`,
-where `tsvd(A::TensorMap)` will just compute the singular value decomposition according to
-the given codomain and domain of `A`.
+In fact, this is what we already used above to create the matricized tensor `A_matrix`
+passed to `svd_compact`, where `svd_compact(A::AbstractTensorMap)` will just compute the
+singular value decomposition according to the given codomain and domain of `A`.
 
 Note, finally, that the `@tensor` macro treats all indices at the same footing and thus does
 not distinguish between codomain and domain. The linear numbering is first all indices in
@@ -243,12 +246,12 @@ where `ℂ` is obtained as `\bbC+TAB` and we also have the non-Unicode alternati
 
 ```@repl tutorial
 B = randn(ℂ^3 * ℂ^2 * ℂ^4);
-C = im* A + (2.5 - 0.8im) * B
+C = im * A + (2.5 - 0.8im) * B
 scalarBA = dot(B, A)
 scalarAA = dot(A, A)
 normA² = norm(A)^2
-U, S, Vd = tsvd(A, ((1, 3), (2,)));
-@tensor A′[a,b, c] := U[a,c, d] * S[d,e] * Vd[e,b];
+U, S, Vd = svd_compact(permute(A, ((1, 3), (2,))));
+@tensor A′[a, b, c] := U[a, c, d] * S[d, e] * Vd[e, b];
 A′ ≈ A
 permute(A, ((1, 3), (2,))) ≈ U * S * Vd
 ```
@@ -321,8 +324,8 @@ convert(Array, A)
 
 Here, we create a 5-dimensional space `V1`, which has a three-dimensional subspace
 associated with charge 0 (the trivial irrep of ``ℤ₂``) and a two-dimensional subspace with
-charge 1 (the non-trivial irrep). Similar for `V2`, where both subspaces are one-
-dimensional. Representing the tensor as a dense `Array`, we see that it is zero in those
+charge 1 (the non-trivial irrep). Similarly for `V2`, where both subspaces are
+one-dimensional. Representing the tensor as a dense `Array`, we see that it is zero in those
 regions where the charges don't add to zero (modulo 2). Of course, the `Tensor(Map)` type in
 TensorKit.jl won't store these zero blocks, and only stores the non-zero information, which
 we can recognize also in the full `Array` representation.
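A minimal sketch of such a graded construction (assuming TensorKit.jl; `Z2Space` is the non-Unicode constructor, and the index layout of `A` here is illustrative rather than the tutorial's exact one):

```julia
using TensorKit

V1 = Z2Space(0 => 3, 1 => 2)   # 5-dimensional: 3 ⊕ 2 over the two ℤ₂ charges
V2 = Z2Space(0 => 1, 1 => 1)
A = randn(V1 ⊗ V1 ⊗ V2)        # only charge-conserving blocks are stored
dim(A)                          # number of stored (non-zero) parameters
```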
@@ -333,7 +336,7 @@ encountered in the previous examples.
 ```@repl tutorial
 B = randn(V1' * V1 * V2);
 @tensor C[a,b] := A[a,c,d] * B[c,b,d]
-U, S, V = tsvd(A, ((1, 3), (2,)));
+U, S, V = svd_compact(permute(A, ((1, 3), (2,))));
 U' * U # should be the identity on the corresponding domain = codomain
 U' * U ≈ one(U'*U)
 P = U * U' # should be a projector
@@ -349,7 +352,7 @@ A = randn(V * V, V)
 dim(A)
 convert(Array, A)
 
-V = Rep[U₁× ℤ₂]((0, 0) => 2, (1, 1) => 1, (-1, 0) => 1)
+V = Rep[U₁ × ℤ₂]((0, 0) => 2, (1, 1) => 1, (-1, 0) => 1)
 dim(V)
 A = randn(V * V, V)
 dim(A)
@@ -366,12 +369,12 @@ more general sectortypes `I` it can be constructed as `Vect[I]`. Furthermore, `
 synonyms, e.g.
 
 ```@repl tutorial
-Rep[U₁](0=> 3, 1=> 2, -1=> 1) == U1Space(-1=> 1, 1=> 2, 0=> 3)
-V = U₁Space(1=> 2, 0=> 3, -1=> 1)
+Rep[U₁](0 => 3, 1 => 2, -1 => 1) == U1Space(-1 => 1, 1 => 2, 0 => 3)
+V = U₁Space(1 => 2, 0 => 3, -1 => 1)
 for s in sectors(V)
     @show s, dim(V, s)
 end
-U₁Space(-1=> 1, 0=> 3, 1=> 2) == GradedSpace(Irrep[U₁](1)=> 2, Irrep[U₁](0)=> 3, Irrep[U₁](-1)=> 1)
+U₁Space(-1 => 1, 0 => 3, 1 => 2) == GradedSpace(Irrep[U₁](1) => 2, Irrep[U₁](0) => 3, Irrep[U₁](-1) => 1)
 supertype(GradedSpace)
 ```
 
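The `Vect[I]` form mentioned above constructs the same spaces; a brief sketch (assuming TensorKit.jl):

```julia
using TensorKit

V = Vect[Irrep[U₁]](0 => 3, 1 => 2, -1 => 1)
V == U1Space(0 => 3, 1 => 2, -1 => 1)   # U1Space is an alias for Vect[Irrep[U₁]]
```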
@@ -416,13 +419,13 @@ less obvious to recognize the dense blocks, as there are additional zeros and th
 the original tensor data do not match with those in the `Array`. The reason is of course
 that the original tensor data now needs to be transformed with a construction known as
 fusion trees, which are made up out of the Clebsch-Gordan coefficients of the group. Indeed,
-note that the non-zero blocks are also no longer labeled by a list of sectors, but by pairs
-of fusion trees. This will be explained further in the manual. However, the Clebsch-Gordan
-coefficients of the group are only needed to actually convert a tensor to an `Array`. For
-working with tensors with `SU₂Space` indices, e.g. contracting or factorizing them, the
-Clebsch-Gordan coefficients are never needed explicitly. Instead, recoupling relations are
-used to symbolically manipulate the basis of fusion trees, and this only requires what is
-known as the topological data of the group (or its representation theory).
+note that the non-zero subblocks are also no longer labeled by a list of sectors, but by
+pairs of fusion trees. This will be explained further in the manual. However, the
+Clebsch-Gordan coefficients of the group are only needed to actually convert a tensor to an
+`Array`. For working with tensors with `SU₂Space` indices, e.g. contracting or factorizing
+them, the Clebsch-Gordan coefficients are never needed explicitly. Instead, recoupling
+relations are used to symbolically manipulate the basis of fusion trees, and this only
+requires what is known as the topological data of the group (or its representation theory).
 
 In fact, this formalism extends beyond the case of group representations on vector spaces,
 and can also deal with super vector spaces (to describe fermions) and more general (unitary)