
Commit 81c93cb

committed: update week 2
1 parent a0d5079 commit 81c93cb

7 files changed: 162 additions & 349 deletions


doc/pub/week2/html/week2-bs.html

Lines changed: 0 additions & 26 deletions
@@ -148,10 +148,6 @@
 None,
 'setting-up-the-back-propagation-algorithm-part-3'),
 ('Updating the gradients', 2, None, 'updating-the-gradients'),
-('Example to the calculation of gradient',
- 2,
- None,
- 'example-to-the-calculation-of-gradient'),
 ('Fine-tuning neural network hyperparameters',
 2,
 None,
@@ -301,7 +297,6 @@
 <!-- navigation toc: --> <li><a href="#setting-up-the-back-propagation-algorithm-part-2" style="font-size: 80%;"><b>Setting up the back propagation algorithm, part 2</b></a></li>
 <!-- navigation toc: --> <li><a href="#setting-up-the-back-propagation-algorithm-part-3" style="font-size: 80%;"><b>Setting up the Back propagation algorithm, part 3</b></a></li>
 <!-- navigation toc: --> <li><a href="#updating-the-gradients" style="font-size: 80%;"><b>Updating the gradients</b></a></li>
-<!-- navigation toc: --> <li><a href="#example-to-the-calculation-of-gradient" style="font-size: 80%;"><b>Example to the calculation of gradient</b></a></li>
 <!-- navigation toc: --> <li><a href="#fine-tuning-neural-network-hyperparameters" style="font-size: 80%;"><b>Fine-tuning neural network hyperparameters</b></a></li>
 <!-- navigation toc: --> <li><a href="#hidden-layers" style="font-size: 80%;"><b>Hidden layers</b></a></li>
 <!-- navigation toc: --> <li><a href="#which-activation-function-should-i-use" style="font-size: 80%;"><b>Which activation function should I use?</b></a></li>
@@ -942,27 +937,6 @@ <h2 id="updating-the-gradients" class="anchor">Updating the gradients </h2>
 $$


-<!-- !split -->
-<h2 id="example-to-the-calculation-of-gradient" class="anchor">Example to the calculation of gradient </h2>
-
-<p>Consider a simple NN in which the inputs \( \boldsymbol{x} \), the weights
-\( \boldsymbol{W} \), the biases \( \boldsymbol{b} \) and the ouputs
-\( \boldsymbol{\tilde{y}}=f(\boldsymbol{x};\boldsymbol{\Theta}) \) are just scalars and that we
-have two layers only, that is the output layer is labeled with \( L=2 \).
-</p>
-
-<p>Our output is then (no boldfaced symbols since all quantities are scalars)</p>
-$$
-\tilde{y}=f(x;Theta))=\sigma_{L=2}(w_2\sigma_1(w_1x+b_1)+b_2).
-$$
-
-<p>For the back-propagation algorithm we will need various partial derivatives. One of these is</p>
-$$
-\frac{\partial f(x;\Theta)}{\partial w_1}=
-$$
-
-<!-- \sigma_2(w_2\sigma_1(w_1x+b_1)+b_2)\times w_2\sigma_1(w_1x+b_1)x. -->
-
 <!-- !split -->
 <h2 id="fine-tuning-neural-network-hyperparameters" class="anchor">Fine-tuning neural network hyperparameters </h2>
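
A note on the removed slide: its final derivative was left unstated, with the intended right-hand side surviving only in an HTML comment, and there without the derivative primes. Assuming \( \sigma_k' \) denotes the derivative of the \( k \)-th activation, the chain rule applied to the scalar two-layer output \( \tilde{y}=\sigma_2(w_2\sigma_1(w_1x+b_1)+b_2) \) gives a plausible completion:

$$
\frac{\partial f(x;\Theta)}{\partial w_1}
= \sigma_2'\bigl(w_2\sigma_1(w_1x+b_1)+b_2\bigr)\, w_2\, \sigma_1'\bigl(w_1x+b_1\bigr)\, x.
$$

This matches the commented-out line in the source up to the missing primes on \( \sigma_2 \) and \( \sigma_1 \).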

doc/pub/week2/html/week2-reveal.html

Lines changed: 0 additions & 26 deletions
@@ -860,32 +860,6 @@ <h2 id="updating-the-gradients">Updating the gradients </h2>
 <p>&nbsp;<br>
 </section>

-<section>
-<h2 id="example-to-the-calculation-of-gradient">Example to the calculation of gradient </h2>
-
-<p>Consider a simple NN in which the inputs \( \boldsymbol{x} \), the weights
-\( \boldsymbol{W} \), the biases \( \boldsymbol{b} \) and the ouputs
-\( \boldsymbol{\tilde{y}}=f(\boldsymbol{x};\boldsymbol{\Theta}) \) are just scalars and that we
-have two layers only, that is the output layer is labeled with \( L=2 \).
-</p>
-
-<p>Our output is then (no boldfaced symbols since all quantities are scalars)</p>
-<p>&nbsp;<br>
-$$
-\tilde{y}=f(x;Theta))=\sigma_{L=2}(w_2\sigma_1(w_1x+b_1)+b_2).
-$$
-<p>&nbsp;<br>
-
-<p>For the back-propagation algorithm we will need various partial derivatives. One of these is</p>
-<p>&nbsp;<br>
-$$
-\frac{\partial f(x;\Theta)}{\partial w_1}=
-$$
-<p>&nbsp;<br>
-
-<!-- \sigma_2(w_2\sigma_1(w_1x+b_1)+b_2)\times w_2\sigma_1(w_1x+b_1)x. -->
-</section>
-
 <section>
 <h2 id="fine-tuning-neural-network-hyperparameters">Fine-tuning neural network hyperparameters </h2>

doc/pub/week2/html/week2-solarized.html

Lines changed: 0 additions & 25 deletions
@@ -175,10 +175,6 @@
 None,
 'setting-up-the-back-propagation-algorithm-part-3'),
 ('Updating the gradients', 2, None, 'updating-the-gradients'),
-('Example to the calculation of gradient',
- 2,
- None,
- 'example-to-the-calculation-of-gradient'),
 ('Fine-tuning neural network hyperparameters',
 2,
 None,
@@ -870,27 +866,6 @@ <h2 id="updating-the-gradients">Updating the gradients </h2>
 $$


-<!-- !split --><br><br><br><br><br><br><br><br><br><br>
-<h2 id="example-to-the-calculation-of-gradient">Example to the calculation of gradient </h2>
-
-<p>Consider a simple NN in which the inputs \( \boldsymbol{x} \), the weights
-\( \boldsymbol{W} \), the biases \( \boldsymbol{b} \) and the ouputs
-\( \boldsymbol{\tilde{y}}=f(\boldsymbol{x};\boldsymbol{\Theta}) \) are just scalars and that we
-have two layers only, that is the output layer is labeled with \( L=2 \).
-</p>
-
-<p>Our output is then (no boldfaced symbols since all quantities are scalars)</p>
-$$
-\tilde{y}=f(x;Theta))=\sigma_{L=2}(w_2\sigma_1(w_1x+b_1)+b_2).
-$$
-
-<p>For the back-propagation algorithm we will need various partial derivatives. One of these is</p>
-$$
-\frac{\partial f(x;\Theta)}{\partial w_1}=
-$$
-
-<!-- \sigma_2(w_2\sigma_1(w_1x+b_1)+b_2)\times w_2\sigma_1(w_1x+b_1)x. -->
-
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="fine-tuning-neural-network-hyperparameters">Fine-tuning neural network hyperparameters </h2>

doc/pub/week2/html/week2.html

Lines changed: 0 additions & 25 deletions
@@ -252,10 +252,6 @@
 None,
 'setting-up-the-back-propagation-algorithm-part-3'),
 ('Updating the gradients', 2, None, 'updating-the-gradients'),
-('Example to the calculation of gradient',
- 2,
- None,
- 'example-to-the-calculation-of-gradient'),
 ('Fine-tuning neural network hyperparameters',
 2,
 None,
@@ -947,27 +943,6 @@ <h2 id="updating-the-gradients">Updating the gradients </h2>
 $$


-<!-- !split --><br><br><br><br><br><br><br><br><br><br>
-<h2 id="example-to-the-calculation-of-gradient">Example to the calculation of gradient </h2>
-
-<p>Consider a simple NN in which the inputs \( \boldsymbol{x} \), the weights
-\( \boldsymbol{W} \), the biases \( \boldsymbol{b} \) and the ouputs
-\( \boldsymbol{\tilde{y}}=f(\boldsymbol{x};\boldsymbol{\Theta}) \) are just scalars and that we
-have two layers only, that is the output layer is labeled with \( L=2 \).
-</p>
-
-<p>Our output is then (no boldfaced symbols since all quantities are scalars)</p>
-$$
-\tilde{y}=f(x;Theta))=\sigma_{L=2}(w_2\sigma_1(w_1x+b_1)+b_2).
-$$
-
-<p>For the back-propagation algorithm we will need various partial derivatives. One of these is</p>
-$$
-\frac{\partial f(x;\Theta)}{\partial w_1}=
-$$
-
-<!-- \sigma_2(w_2\sigma_1(w_1x+b_1)+b_2)\times w_2\sigma_1(w_1x+b_1)x. -->
-
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="fine-tuning-neural-network-hyperparameters">Fine-tuning neural network hyperparameters </h2>
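
As a quick, independent sanity check on the completion quoted after the first file's diff (not part of this commit), SymPy reproduces the same chain-rule factorization; sigma_1 and sigma_2 here stand in for generic differentiable activations:

# A minimal sketch (my verification, not from the commit): differentiate the
# removed example's two-layer scalar network with SymPy and confirm the
# chain-rule factorization quoted above.
import sympy as sp

x, w1, w2, b1, b2 = sp.symbols('x w_1 w_2 b_1 b_2')
sigma1, sigma2 = sp.Function('sigma_1'), sp.Function('sigma_2')  # generic activations

y = sigma2(w2 * sigma1(w1 * x + b1) + b2)  # tilde{y} = f(x; Theta)

# Partial derivative with respect to w_1; SymPy applies the chain rule and
# prints sigma_2'(w_2*sigma_1(w_1*x+b_1)+b_2) * w_2 * sigma_1'(w_1*x+b_1) * x,
# rendered in its Derivative/Subs notation.
print(sp.diff(y, w1))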

Binary file not shown (0 bytes changed).
