Note that $(\theta_1,\theta_2)$ represents the parameter vector while $(x)$ represents the input vector. Hence, the above output is an affine transformation of $x \in \mathbb{R}^1$.
```gap
gap> dummy_input := CreateContextualVariables( [ "theta_1", "theta_2", "x" ] );
[ theta_1, theta_2, x ]
gap> Display( f : dummy_input := dummy_input );
ℝ^1 -> ℝ^1 defined by:

Underlying Object:
```

In the following we construct the aforementioned loss map:

```gap
gap> f := NeuralNetworkPredictionMorphism( Para, input_dim, hidden_dims, output_dim, "Softmax" );;
```
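The lens pattern behind the package's automatic differentiation pairs a forward pass with a backward pass through the same map. As an illustration only (outside GAP, with hypothetical names `forward`/`backward` that are not the package's API), the affine map $f(\theta, x) = \theta_1 x + \theta_2$ can be sketched as such a lens:

```python
# Illustrative lens for the affine map f(theta, x) = theta_1 * x + theta_2.
# The names and structure here are hypothetical, not the package's API.

def forward(theta, x):
    """Forward pass: evaluate the affine map."""
    theta_1, theta_2 = theta
    return theta_1 * x + theta_2

def backward(theta, x, dy):
    """Backward pass: given the upstream gradient dy, return gradients
    with respect to the parameters and the input."""
    theta_1, _ = theta
    dtheta = (dy * x, dy)   # d/dtheta_1 = x, d/dtheta_2 = 1
    dx = dy * theta_1       # d/dx = theta_1
    return dtheta, dx

y = forward((2.0, 3.0), 5.0)                  # 2*5 + 3 = 13.0
grads, dx = backward((2.0, 3.0), 5.0, 1.0)    # ((5.0, 1.0), 2.0)
```

The backward pass is what gives the bidirectional data flow needed for gradient-based training: gradients flow to the parameters (for the optimizer step) and to the input (for composition with earlier layers).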
As a parametrized map this neural network is defined as:
Note that $(\theta_1,\dots,\theta_9)$ represents the parameter vector while $(x_{1},x_{2})$ represents the input vector. Hence, the above output is the _Softmax_ of an affine transformation of $(x_{1},x_{2})$.
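For reference, Softmax exponentiates each entry of a vector and normalises so the outputs are positive and sum to $1$. A minimal, numerically stable sketch in plain Python (illustrative only, not the package's implementation):

```python
import math

def softmax(z):
    """Numerically stable softmax: subtract the max before exponentiating."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
# probs is a probability vector: entries are positive and sum to 1,
# with larger inputs mapped to larger probabilities.
```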
Now let us use the updated theta (the last $9$ entries) to predict the label $\in$ {_class-1_, _class-2_, _class-3_} of the point $[1,-1]\in\mathbb{R}^2$.
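The $9$ parameters of a $\mathbb{R}^2 \to \mathbb{R}^3$ affine layer can be read as a $3\times 2$ weight matrix followed by a length-$3$ bias. The following Python sketch shows the prediction step under that assumption; the theta values here are hypothetical placeholders, not the values produced by the GAP training session:

```python
import math

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

def predict(theta, x):
    """theta: 9 entries read as a 3x2 weight matrix followed by a 3-entry bias."""
    W = [theta[0:2], theta[2:4], theta[4:6]]
    b = theta[6:9]
    z = [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
         for row, b_i in zip(W, b)]
    probs = softmax(z)
    return max(range(3), key=lambda i: probs[i])  # index of most probable class

# Hypothetical parameters; the real values come from the trained GAP session.
theta = [1.5, -0.5, -0.2, 0.1, -1.3, 0.4, 0.0, 0.0, 0.0]
label = predict(theta, [1.0, -1.0])  # 0, i.e. class-1 for these placeholder values
```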
`doc/Doc.autodoc`:
@Chapter Introduction

The GradientBasedLearningForCAP package is a computational tool for categorical machine learning within the CAP (Categories, Algorithms, Programming) framework.
It provides a categorical foundation for neural networks by modelling them as parametrised morphisms and performing computation in the category of smooth maps.
The system supports symbolic expressions and automatic differentiation via the lens pattern, enabling the bidirectional data flow required for backpropagation.
Included examples demonstrate practical applications such as finding a local minimum and training models for binary classification, multi-class classification, and linear regression, using various loss functions and optimizers including gradient descent and Adam.
This implementation is based on the paper $\href{https://arxiv.org/abs/2404.00408}{Deep~Learning~with~Parametric~Lenses}$.

@Section Overview
The package implements the following main concepts:

* **Examples**: Examples for creating and training neural networks and computing local minima.

* **Expressions**: A symbolic expression system for representing mathematical formulas.
* **Tools**: A few GAP operations and helper functions.

@Chapter Examples

@Section Binary-Class Neural Network with Binary Cross-Entropy Loss Function

@Section Multi-Class Neural Network with Cross-Entropy Loss Function

@Section Neural Network with Quadratic Loss Function