
Commit 2e1cf6e

Merge pull request #11 from kamalsaleh/tmp
Add Example, Update README
2 parents 9f1f8e4 + 414242e

9 files changed

Lines changed: 181 additions & 54 deletions

PackageInfo.g

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ SetPackageInfo( rec(
 
 PackageName := "GradientBasedLearningForCAP",
 Subtitle := "Gradient Based Learning via Category Theory",
-Version := "2026.01-01",
+Version := "2026.01-02",
 Date := (function ( ) if IsBound( GAPInfo.SystemEnvironment.GAP_PKG_RELEASE_DATE ) then return GAPInfo.SystemEnvironment.GAP_PKG_RELEASE_DATE; else return Concatenation( ~.Version{[ 1 .. 4 ]}, "-", ~.Version{[ 6, 7 ]}, "-01" ); fi; end)( ),
 License := "GPL-2.0-or-later",
 
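With the bumped version, the Date fallback derives the release date by slicing the year and month out of the Version string. A minimal sketch of that derivation (hypothetical variable names, plain GAP string operations):

```gap
version := "2026.01-02";;
# characters 1..4 give the year, characters 6..7 the month; the day defaults to "01"
date := Concatenation( version{[ 1 .. 4 ]}, "-", version{[ 6, 7 ]}, "-01" );
# yields "2026-01-01"
```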

README.md

Lines changed: 16 additions & 16 deletions
@@ -112,18 +112,18 @@ where the activation map applied on the output layer is the identity function _I
 ```julia
 gap> input_dim := 1;; hidden_dims := [ ];; output_dim := 1;;
 
-gap> f := PredictionMorphismOfNeuralNetwork( Para, input_dim, hidden_dims, output_dim, "IdFunc" );;
+gap> f := NeuralNetworkPredictionMorphism( Para, input_dim, hidden_dims, output_dim, "IdFunc" );;
 ```
 As a parametrized map this neural network is defined as:
 
 <img src="pictures/eq-1.png" alt="Image Description" width="1000" height="120">
 
 Note that $(\theta_1,\theta_2)$ represents the parameters-vector while $(x)$ represents the input-vector. Hence, the above output is an affine transformation of $(x)\in \mathbb{R}^1$.
 ```julia
-gap> input := ConvertToExpressions( [ "theta_1", "theta_2", "x" ] );
+gap> dummy_input := CreateContextualVariables( [ "theta_1", "theta_2", "x" ] );
 [ theta_1, theta_2, x ]
 
-gap> Display( f : dummy_input := input );
+gap> Display( f : dummy_input := dummy_input );
 ℝ^1 -> ℝ^1 defined by:
 
 Underlying Object:
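The rendered equation in pictures/eq-1.png is not reproduced here; for a one-dimensional network with no hidden layers and identity activation it is presumably the affine map

$$f\big((\theta_1,\theta_2),\,x\big) = \theta_1 \cdot x + \theta_2,$$

where the ordering of weight and bias is an assumption, not taken from the image.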
@@ -156,12 +156,12 @@ Note that $(\theta_1,\theta_2)$ represents the parameters-vector while $(x,y)$ r
 In the following we construct the aforementioned loss-map:
 
 ```julia
-gap> ell := LossMorphismOfNeuralNetwork( Para, input_dim, hidden_dims, output_dim, "IdFunc" );;
+gap> ell := NeuralNetworkLossMorphism( Para, input_dim, hidden_dims, output_dim, "IdFunc" );;
 
-gap> input := ConvertToExpressions( [ "theta_1", "theta_2", "x", "y" ] );
+gap> dummy_input := CreateContextualVariables( [ "theta_1", "theta_2", "x", "y" ] );
 [ theta_1, theta_2, x, y ]
 
-gap> Display( ell : dummy_input := input );
+gap> Display( ell : dummy_input := dummy_input );
 ℝ^2 -> ℝ^1 defined by:
 
 Underlying Object:
@@ -209,7 +209,7 @@ gap> theta := [ 0.1, -0.1 ];;
 
 To perform _nr_epochs_ = 15 updates on $\theta\in\mathbb{R}^2$ we can use the _Fit_ operation:
 ```julia
-gap> nr_epochs := 10;;
+gap> nr_epochs := 15;;
 
 gap> theta := Fit( one_epoch_update, nr_epochs, theta );
 Epoch 0/15 - loss = 26.777499999999993
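Assembled end to end, the renamed API reads as follows; a minimal sketch, assuming the dataset D, the optimizer, batch_size and the dimensions are defined as elsewhere in this README:

```gap
gap> ell := NeuralNetworkLossMorphism( Para, input_dim, hidden_dims, output_dim, "IdFunc" );;
gap> one_epoch_update := OneEpochUpdateLens( ell, optimizer, D, batch_size );;
gap> theta := Fit( one_epoch_update, nr_epochs, theta );;  # prints the loss once per epoch
```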
@@ -321,7 +321,7 @@ Its input dimension is 2 and output dimension is 3 and has no hidden layers.
 ```julia
 gap> input_dim := 2;; hidden_dims := [ ];; output_dim := 3;;
 
-gap> f := PredictionMorphismOfNeuralNetwork( Para, input_dim, hidden_dims, output_dim, "Softmax" );;
+gap> f := NeuralNetworkPredictionMorphism( Para, input_dim, hidden_dims, output_dim, "Softmax" );;
 ```
 
 As a parametrized map this neural network is defined as:
@@ -330,10 +330,10 @@ As a parametrized map this neural network is defined as:
 
 Note that $(\theta_1,\dots,\theta_9)$ represents the parameters-vector while $(x_{1},x_{2})$ represents the input-vector. Hence, the above output is the _Softmax_ of an affine transformation of $(x_{1},x_{2})$.
 ```julia
-gap> input := ConvertToExpressions( [ "theta_1", "theta_2", "theta_3", "theta_4", "theta_5", "theta_6", "theta_7", "theta_8", "theta_9", "x1", "x2" ] );
+gap> dummy_input := CreateContextualVariables( [ "theta_1", "theta_2", "theta_3", "theta_4", "theta_5", "theta_6", "theta_7", "theta_8", "theta_9", "x1", "x2" ] );
 [ theta_1, theta_2, theta_3, theta_4, theta_5, theta_6, theta_7, theta_8, theta_9, x1, x2 ]
 
-gap> Display( f : dummy_input := input );
+gap> Display( f : dummy_input := dummy_input );
 ℝ^2 -> ℝ^3 defined by:
 
 Underlying Object:
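For reference, the _Softmax_ applied to the affine output $z=(z_1,z_2,z_3)$ is the standard map

$$\mathrm{Softmax}(z)_i = \frac{e^{z_i}}{e^{z_1}+e^{z_2}+e^{z_3}}, \qquad i = 1,2,3,$$

so the three displayed components are positive, sum to $1$, and can be read as class probabilities.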
@@ -380,11 +380,11 @@ $$\text{Cross-Entropy}((z_1,z_2,z_3),(y_{1},y_{2},y_{3})) := -\frac{1}{3}\left(y
 In the following we construct the aforementioned loss-map:
 
 ```julia
-gap> ell := LossMorphismOfNeuralNetwork( Para, input_dim, hidden_dims, output_dim, "Softmax" );;
+gap> ell := NeuralNetworkLossMorphism( Para, input_dim, hidden_dims, output_dim, "Softmax" );;
 
-gap> input := ConvertToExpressions( [ "theta_1", "theta_2", "theta_3", "theta_4", "theta_5", "theta_6", "theta_7", "theta_8", "theta_9", "x1", "x2", "y1", "y2", "y3" ] );
+gap> dummy_input := CreateContextualVariables( [ "theta_1", "theta_2", "theta_3", "theta_4", "theta_5", "theta_6", "theta_7", "theta_8", "theta_9", "x1", "x2", "y1", "y2", "y3" ] );
 
-gap> Display( ell : dummy_input := input );
+gap> Display( ell : dummy_input := dummy_input );
 ℝ^5 -> ℝ^1 defined by:
 
 Underlying Object:
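The source of the loss morphism is $\mathbb{R}^5$ because each input point $(x_1,x_2)\in\mathbb{R}^2$ travels together with its one-hot label $(y_1,y_2,y_3)\in\mathbb{R}^3$, i.e. $2+3=5$, while the nine $\theta$'s live in the parameter object $\mathbb{R}^9$.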
@@ -416,7 +416,7 @@ CategoryOfLenses( SkeletalSmoothMaps )
 
 gap> optimizer := Lenses.AdamOptimizer( : learning_rate := 0.01, beta_1 := 0.9, beta_2 := 0.999 );;
 
-gap> optimizer( 9 )
+gap> optimizer( 9 );
 (ℝ^28, ℝ^28) -> (ℝ^9, ℝ^9) defined by:
 
 Get Morphism:
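The state dimension $28$ decomposes as the time step together with the first- and second-moment estimates and the parameters themselves,

$$w = (t,\, m_1,\dots,m_9,\, v_1,\dots,v_9,\, \theta_1,\dots,\theta_9) \in \mathbb{R}^{1+9+9+9} = \mathbb{R}^{28},$$

matching the layout $\mathbb{R}^{1+2+2+2} = \mathbb{R}^7$ used in the two-parameter example added in this commit.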
@@ -433,7 +433,7 @@ Now we compute the One-Epoch-Update-Lens using the _batch size_ = 1:
 ```julia
 gap> batch_size := 1;;
 
-gap> one_epoch_update := OneEpochUpdateLens( ell, optimizer, D, batch_size );;
+gap> one_epoch_update := OneEpochUpdateLens( ell, optimizer, D, batch_size );
 (ℝ^28, ℝ^28) -> (ℝ^1, ℝ^0) defined by:
 
 Get Morphism:
@@ -494,7 +494,7 @@ Epoch 4/4 - loss = 0.0030655216725219204
 Now let us use the updated theta (the last $9$ entries) to predict the label $\in$ {_class-1_, _class-2_, _class-3_} of the point $[1,-1]\in\mathbb{R}^2$.
 
 ```julia
-gap> theta := SplitDenseList( w, [ 19, 9 ] )[2];
+gap> theta := w{ [ 20 .. 28 ] };
 [ 5.09137, -4.83379, 3.06257, -5.70976, 0.837175, -4.23622, -1.71171, 5.54301, -4.80856 ]
 
 gap> theta := SkeletalSmoothMaps.Constant( theta );
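The two extractions agree: the first $19 = 1 + 9 + 9$ entries of $w$ are the Adam state $(t, m, v)$ and the trailing $9$ entries are the weights. A quick consistency check in the same session:

```gap
gap> w{ [ 20 .. 28 ] } = SplitDenseList( w, [ 19, 9 ] )[2];
true
```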

doc/Doc.autodoc

Lines changed: 11 additions & 7 deletions
@@ -1,13 +1,16 @@
 @Chapter Introduction
 
-This package provides tools for exploring categorical machine learning using the CAP (Categories, Algorithms, Programming) system.
-It implements automatic differentiation using the lens pattern and provides constructs for building and training neural networks.
+The GradientBasedLearningForCAP package is a computational tool for categorical machine learning within the CAP (Categories, Algorithms, Programming) framework.
+It provides a categorical foundation for neural networks by modelling them as parametrised morphisms and performing computation in the category of smooth maps.
+The system supports symbolic expressions and automatic differentiation via the lens pattern, enabling the bidirectional data flow required for backpropagation.
+Included examples demonstrate practical applications such as finding a local minimum and training models for binary classification, multi-class classification, and linear regression, using various loss functions and optimizers including gradient descent and Adam.
+This implementation is based on the paper $\href{https://arxiv.org/abs/2404.00408}{Deep~Learning~with~Parametric~Lenses}$.
 
 @Section Overview
 
 The package implements the following main concepts:
 
-* **Examples**: Examples for creating and training neural networks.
+* **Examples**: Examples for creating and training neural networks and computing local minima.
 
 * **Expressions**: A symbolic expression system for representing mathematical formulas.
 

@@ -26,11 +29,12 @@ The package implements the following main concepts:
 * **Tools**: Few GAP operations and helper functions.
 
 
-@Chapter Examples for neural networks
+@Chapter Examples
 
-@Section Binary-class neural network with binary cross-entropy loss function
-@Section Multi-class neural network with cross-entropy loss function
-@Section Neural network with quadratic loss function
+@Section Binary-Class Neural Network with Binary Cross-Entropy Loss Function
+@Section Multi-Class Neural Network with Cross-Entropy Loss Function
+@Section Neural Network with Quadratic Loss Function
+@Section Next Local Minima
 
 @Chapter Expressions
 
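The setup sketched in this introduction is assembled in a few lines; a minimal session mirroring the shipped examples:

```gap
LoadPackage( "GradientBasedLearningForCAP" );
Smooth := SkeletalCategoryOfSmoothMaps( );          # objects are the ℝ^n, morphisms are smooth maps
Lenses := CategoryOfLenses( Smooth );               # bidirectional lenses, used for backpropagation
Para := CategoryOfParametrisedMorphisms( Smooth );  # morphisms equipped with a parameter object
```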

Lines changed: 138 additions & 19 deletions
@@ -1,36 +1,155 @@
-LoadPackage( "GradientBasedLearningForCAP" );
+#! @Chapter Examples
+
+#! @Section Next Local Minima
 
+#! In this example we demonstrate how to use the fitting machinery of
+#! $\texttt{GradientBasedLearningForCAP}$ to find a nearby local minimum of a smooth
+#! function by gradient-based optimisation.
+#!
+#! We consider the function
+#! @BeginLatexOnly
+#! \[
+#! f(\theta_1,\theta_2) = \sin(\theta_1)^2 + \log(\theta_2)^2,
+#! \]
+#! @EndLatexOnly
+#! which has local minima at the points $(\pi k, 1)$ for $k \in \mathbb{Z}$.
+#! We use the Adam optimiser to find a local minimum starting from an initial point.
+#! Hence, the parameter vector is of the form
+#! @BeginLatexOnly
+#! \[
+#! w = (t, m_1, m_2, v_1, v_2, \theta_1, \theta_2),
+#! \]
+#! @EndLatexOnly
+#! where $t$ is the time step, $m_1$ and $m_2$ are the first moment estimates for
+#! $\theta_1$ and $\theta_2$ respectively, and $v_1$ and $v_2$ are the second moment
+#! estimates for $\theta_1$ and $\theta_2$ respectively.
+#! We start from the initial point
+#! @BeginLatexOnly
+#! \[
+#! w = (1, 0, 0, 0, 0, 1.58, 0.1),
+#! \]
+#! @EndLatexOnly
+#! which is close to the local minimum at $(\pi, 1)$.
+#! After running the optimisation for $500$ epochs, we reach the point
+#! @BeginLatexOnly
+#! \[
+#! w = (501, -9.35215 \times 10^{-12}, 0.041779, 0.00821802, 1.5526, 3.14159, 0.980292),
+#! \]
+#! @EndLatexOnly
+#! where the last two components correspond to the parameters $\theta_1$ and $\theta_2$.
+#! Evaluating the function $f$ at this point gives us the value
+#! @BeginLatexOnly
+#! \[
+#! f(3.14159, 0.980292) = 0.000396202,
+#! \]
+#! @EndLatexOnly
+#! which is very close to $0$, the value of the function at the local minima.
+#! Thus, we have successfully found a local minimum using gradient-based optimisation.
+#! Note that during the optimisation process,
+#! the $\theta_1$ parameter moved from approximately $1.58$ to approximately $\pi$,
+#! while the $\theta_2$ parameter moved from $0.1$ to approximately $1$.
+#!
+#! @BeginLatexOnly
+#! \begin{center}
+#! \includegraphics[width=0.6\textwidth]{../examples/ComputingTheNextLocalMimima/plot-with-3-local-minimas.png}
+#! \end{center}
+#! @EndLatexOnly
 
-# the function f(x1,x2) = sin(x1)^2 + log(x2)^2 has local miminima at the points (πk, 1) where k ∈ ℤ
+LoadPackage( "GradientBasedLearningForCAP" );
 
+#! @Example
 Smooth := SkeletalCategoryOfSmoothMaps( );
+#! SkeletalSmoothMaps
 Lenses := CategoryOfLenses( Smooth );
+#! CategoryOfLenses( SkeletalSmoothMaps )
 Para := CategoryOfParametrisedMorphisms( Smooth );
+#! CategoryOfParametrisedMorphisms( SkeletalSmoothMaps )
 
-f := PreCompose( Smooth,
+f_smooth := PreCompose( Smooth,
 DirectProductFunctorial( Smooth, [ Smooth.Sin ^ 2, Smooth.Log ^ 2 ] ),
 Smooth.Sum( 2 ) );
+#! ℝ^2 -> ℝ^1
+dummy_input := CreateContextualVariables( [ "theta_1", "theta_2" ] );
+#! [ theta_1, theta_2 ]
+Display( f_smooth : dummy_input := dummy_input );
+#! ℝ^2 -> ℝ^1
+#!
+#! ‣ Sin( theta_1 ) * Sin( theta_1 ) + Log( theta_2 ) * Log( theta_2 )
 
 f := MorphismConstructor( Para,
 ObjectConstructor( Para, Smooth.( 0 ) ),
-Pair( Smooth.( 2 ), f ),
+Pair( Smooth.( 2 ), f_smooth ),
 ObjectConstructor( Para, Smooth.( 1 ) ) );
-
+#! ℝ^0 -> ℝ^1 defined by:
+#!
+#! Underlying Object:
+#! -----------------
+#! ℝ^2
+#!
+#! Underlying Morphism:
+#! -------------------
+#! ℝ^2 -> ℝ^1
+Display( f : dummy_input := dummy_input );
+#! ℝ^0 -> ℝ^1 defined by:
+#!
+#! Underlying Object:
+#! -----------------
+#! ℝ^2
+#!
+#! Underlying Morphism:
+#! -------------------
+#! ℝ^2 -> ℝ^1
+#!
+#! ‣ Sin( theta_1 ) * Sin( theta_1 ) + Log( theta_2 ) * Log( theta_2 )
 optimizer := Lenses.AdamOptimizer( );
-
-# there is only one training example in R^0 which is the trivial vector []
-training_examples := [ [] ];
-
-# what else :)
+#! function( n ) ... end
+training_examples := [ [ ] ];
+#! [ [ ] ]
 batch_size := 1;
-
+#! 1
 one_epoch_update := OneEpochUpdateLens( f, optimizer, training_examples, batch_size );
-
-# initial value for w
+#! (ℝ^7, ℝ^7) -> (ℝ^1, ℝ^0) defined by:
+#!
+#! Get Morphism:
+#! ------------
+#! ℝ^7 -> ℝ^1
+#!
+#! Put Morphism:
+#! ------------
+#! ℝ^7 -> ℝ^7
+dummy_input := CreateContextualVariables(
+[ "t", "m_1", "m_2", "v_1", "v_2", "theta_1", "theta_2" ] );
+#! [ t, m_1, m_2, v_1, v_2, theta_1, theta_2 ]
+Display( one_epoch_update : dummy_input := dummy_input );
+#! (ℝ^7, ℝ^7) -> (ℝ^1, ℝ^0) defined by:
+#!
+#! Get Morphism:
+#! ------------
+#! ℝ^7 -> ℝ^1
+#!
+#! ‣ (Sin( theta_1 ) * Sin( theta_1 ) + Log( theta_2 ) * Log( theta_2 )) / 1 / 1
+#!
+#! Put Morphism:
+#! ------------
+#! ℝ^7 -> ℝ^7
+#!
+#! ‣ t + 1
+#! ‣ 0.9 * m_1 + 0.1 * (-1 * ((1 * ((1 * (Sin( theta_1 ) * Cos( theta_1 ) + Sin( theta_1 ) * Cos( theta_1 )) + 0) * 1 + 0) * 1 + 0) * 1 + 0))
+#! ‣ 0.9 * m_2 + 0.1 * (-1 * (0 + (0 + 1 * (0 + (0 + 1 * (Log( theta_2 ) * (1 / theta_2) + Log( theta_2 ) * (1 / theta_2))) * 1) * 1) * 1))
+#! ‣ 0.999 * v_1 + 0.001 * (-1 * ((1 * ((1 * (Sin( theta_1 ) * Cos( theta_1 ) + Sin( theta_1 ) * Cos( theta_1 )) + 0) * 1 + 0) * 1 + 0) * 1 + 0)) ^ 2
+#! ‣ 0.999 * v_2 + 0.001 * (-1 * (0 + (0 + 1 * (0 + (0 + 1 * (Log( theta_2 ) * (1 / theta_2) + Log( theta_2 ) * (1 / theta_2))) * 1) * 1) * 1)) ^ 2
+#! ‣ theta_1 + 0.001 / (1 - 0.999 ^ t) * ((0.9 * m_1 + 0.1 * (-1 * ((1 * ((1 * (Sin( theta_1 ) * Cos( theta_1 ) + Sin( theta_1 ) * Cos( theta_1 )) + 0) * 1 + 0) * 1 + 0) * 1 + 0))) / (1.e-0\
+#! 7 + Sqrt( (0.999 * v_1 + 0.001 * (-1 * ((1 * ((1 * (Sin( theta_1 ) * Cos( theta_1 ) + Sin( theta_1 ) * Cos( theta_1 )) + 0) * 1 + 0) * 1 + 0) * 1 + 0)) ^ 2) / (1 - 0.999 ^ t) )))
+#! ‣ theta_2 + 0.001 / (1 - 0.999 ^ t) * ((0.9 * m_2 + 0.1 * (-1 * (0 + (0 + 1 * (0 + (0 + 1 * (Log( theta_2 ) * (1 / theta_2) + Log( theta_2 ) * (1 / theta_2))) * 1) * 1) * 1))) / (1.e-07 \
+#! + Sqrt( (0.999 * v_2 + 0.001 * (-1 * (0 + (0 + 1 * (0 + (0 + 1 * (Log( theta_2 ) * (1 / theta_2) + Log( theta_2 ) * (1 / theta_2))) * 1) * 1) * 1)) ^ 2) / (1 - 0.999 ^ t) )))
 w := [ 1, 0, 0, 0, 0, 1.58, 0.1 ];
-
-nr_epochs := 5000;
-
-w := Fit( one_epoch_update, nr_epochs, w );
-
-# after 5000 epoch the found point is [ bla bla, 3.14159, 1 ]
+#! [ 1, 0, 0, 0, 0, 1.58, 0.1 ]
+nr_epochs := 500;
+#! 500
+w := Fit( one_epoch_update, nr_epochs, w : verbose := false );
+#! [ 501, -9.35215e-12, 0.041779, 0.00821802, 1.5526, 3.14159, 0.980292 ]
+theta := w{ [ 6, 7 ] };
+#! [ 3.14159, 0.980292 ]
+Map( f_smooth )( theta );
+#! [ 0.000396202 ]
+#! @EndExample
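For the record, the claim about the minima is immediate: both summands of $f$ are squares, so $f \ge 0$ everywhere, and

$$f(\theta_1,\theta_2) = 0 \iff \sin(\theta_1) = 0 \ \text{and}\ \log(\theta_2) = 0 \iff \theta_1 = \pi k,\ \theta_2 = 1,$$

so every minimum has value $0$, which the fitted value $0.000396202$ approximates.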

examples/NeuralNetwork_BinaryCrossEntropy/neural_network.g

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
-#! @Chapter Examples for neural networks
+#! @Chapter Examples
 
-#! @Section Binary-class neural network with binary cross-entropy loss function
+#! @Section Binary-Class Neural Network with Binary Cross-Entropy Loss Function
 
 LoadPackage( "GradientBasedLearningForCAP" );
 

examples/NeuralNetwork_CrossEntropy/neural_network.g

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
-#! @Chapter Examples for neural networks
+#! @Chapter Examples
 
-#! @Section Multi-class neural network with cross-entropy loss function
+#! @Section Multi-Class Neural Network with Cross-Entropy Loss Function
 
 LoadPackage( "GradientBasedLearningForCAP" );
 

examples/NeuralNetwork_QuadraticLoss/neural_network.g

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 LoadPackage( "GradientBasedLearningForCAP" );
 
-#! @Chapter Examples for neural networks
+#! @Chapter Examples
 
-#! @Section Neural network with quadratic loss function
+#! @Section Neural Network with Quadratic Loss Function
 
 #! This example demonstrates how to train a small feed-forward neural network
 #! for a regression task using the $\texttt{GradientBasedLearningForCAP}$ package. We employ

gap/FitParameters.gd

Lines changed: 4 additions & 4 deletions
@@ -73,7 +73,7 @@
 #! \]
 #! @EndLatexOnly
 #!
-#! For example, if we chose the optimizer to be the gradient descent optimizer with learning rate $\eta=0.01$:
+#! Suppose we choose the optimizer lens to be the gradient descent optimizer with learning rate $\eta = 0.01 > 0$,
 #! @BeginLatexOnly
 #! \[
 #! \begin{tikzpicture}
@@ -83,13 +83,13 @@
 #! \node (Ap) at (-3,-1) {$\mathbb{R}^p$};
 #! \node (Bp) at ( 3,-1) {$\mathbb{R}^p$};
 #! \draw (-1.5,-1.8) rectangle (1.5,1.8);
-#! \draw[->] (A) -- node[above] {$\Theta \mapsto f_i(\Theta)$} (B);
+#! \draw[->] (A) -- node[above] {$\Theta \mapsto \Theta$} (B);
 #! \draw[->] (Bp) -- node[midway, below] {$\Theta + \eta g \mapsfrom (\Theta, g)$} (Ap);
 #! \draw[-] (-1,1) to[out=-90, in=90] (1,-1);
 #! \end{tikzpicture}
 #! \]
 #! @EndLatexOnly
-#! The resulting One-Epoch update lens for the example $X_i$ is given by:
+#! then the resulting One-Epoch update lens for the example $X_i$ is given by
 #! @BeginLatexOnly
 #! \[
 #! \begin{tikzpicture}
@@ -99,7 +99,7 @@
 #! \node (Ap) at (-3,-1) {$\mathbb{R}^p$};
 #! \node (Bp) at ( 3,-1) {$\mathbb{R}^0$};
 #! \draw (-1.5,-1.8) rectangle (1.5,1.8);
-#! \draw[->] (A) -- node[above] {$\Theta \mapsto \Theta$} (B);
+#! \draw[->] (A) -- node[above] {$\Theta \mapsto f_i(\Theta)$} (B);
 #! \draw[->] (Bp) -- node[midway, below] {$\Theta - \eta J_{f_i}(\Theta) \mapsfrom \Theta$} (Ap);
 #! \draw[-] (-1,1) to[out=-90, in=90] (1,-1);
 #! \end{tikzpicture}
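With the swapped labels, the optimizer lens passes $\Theta$ through unchanged and performs the update in its backward pass, while the one-epoch lens evaluates the loss $f_i$ in its forward pass. For intuition, a one-dimensional worked instance of the update $\Theta \mapsto \Theta - \eta J_{f_i}(\Theta)$, taking the hypothetical loss $f_i(\theta) = \theta^2$ with $J_{f_i}(\theta) = 2\theta$ and starting at $\theta = 1$:

$$\theta' = 1 - 0.01 \cdot (2 \cdot 1) = 0.98 .$$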
