
Commit 1236773 ("update")
1 parent 3a248ed

8 files changed: 288 additions & 140 deletions


doc/pub/week13/html/week13-bs.html

Lines changed: 24 additions & 0 deletions
@@ -124,6 +124,8 @@
  None,
  'code-examples-using-keras'),
 ('Code in PyTorch for VAEs', 2, None, 'code-in-pytorch-for-vaes'),
+('Pros and cons about VAEs', 2, None, 'pros-and-cons-about-vaes'),
+('Pros and cons about VAEs', 2, None, 'pros-and-cons-about-vaes'),
 ('Diffusion models, basics', 2, None, 'diffusion-models-basics'),
 ('Problems with probabilistic models',
  2,
@@ -252,6 +254,8 @@
 <!-- navigation toc: --> <li><a href="#computing-the-gradients" style="font-size: 80%;">Computing the gradients</a></li>
 <!-- navigation toc: --> <li><a href="#code-examples-using-keras" style="font-size: 80%;">Code examples using Keras</a></li>
 <!-- navigation toc: --> <li><a href="#code-in-pytorch-for-vaes" style="font-size: 80%;">Code in PyTorch for VAEs</a></li>
+<!-- navigation toc: --> <li><a href="#pros-and-cons-about-vaes" style="font-size: 80%;">Pros and cons about VAEs</a></li>
+<!-- navigation toc: --> <li><a href="#pros-and-cons-about-vaes" style="font-size: 80%;">Pros and cons about VAEs</a></li>
 <!-- navigation toc: --> <li><a href="#diffusion-models-basics" style="font-size: 80%;">Diffusion models, basics</a></li>
 <!-- navigation toc: --> <li><a href="#problems-with-probabilistic-models" style="font-size: 80%;">Problems with probabilistic models</a></li>
 <!-- navigation toc: --> <li><a href="#diffusion-models" style="font-size: 80%;">Diffusion models</a></li>
@@ -1293,6 +1297,26 @@ <h2 id="code-in-pytorch-for-vaes" class="anchor">Code in PyTorch for VAEs </h2>
 </div>


+<!-- !split -->
+<h2 id="pros-and-cons-about-vaes" class="anchor">Pros and cons about VAEs </h2>
+
+<ol>
+<li> Generative Capability: VAEs are powerful generative models that can produce new data similar to the training data.</li>
+<li> Latent Space Structure: The latent space learned by VAEs is continuous, structured, and often interpretable, which makes it useful for interpolation, clustering, and downstream tasks.</li>
+<li> Theoretical Foundation: VAEs are grounded in probability theory and Bayesian inference, offering a principled approach to modeling uncertainty.</li>
+<li> Efficient Training: Unlike GANs (discussed later), VAEs typically train more stably and don&#8217;t suffer from mode collapse.</li>
+<li> Regularization: The KL divergence term in the loss encourages the latent space to follow a standard normal distribution, aiding generalization.
+<!-- o Semi-supervised Learning: VAEs can be adapted for semi-supervised tasks due to their probabilistic framework. --></li>
+</ol>
+<!-- !split -->
+<h2 id="pros-and-cons-about-vaes" class="anchor">Pros and cons about VAEs </h2>
+<ol>
+<li> Blurry Reconstructions: VAEs often produce blurrier outputs compared to GANs due to the use of a probabilistic decoder and the Gaussian likelihood assumption.</li>
+<li> Posterior Collapse: The model might ignore the latent variables entirely (especially with powerful decoders), leading to <b>posterior collapse</b>.</li>
+<li> Trade-off in Loss Function: Balancing the reconstruction term and the KL divergence can be tricky, and tuning this balance is crucial.</li>
+<li> Assumed Distribution Limitations: The prior (usually Gaussian) and the approximated posterior may not capture the true data distribution well.</li>
+<li> Less Sharp Compared to GANs: In image generation tasks, the samples are usually less sharp and detailed than those generated by GANs.</li>
+</ol>
 <!-- !split -->
 <h2 id="diffusion-models-basics" class="anchor">Diffusion models, basics </h2>

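The balance the two added lists turn on is the negative ELBO used as the training loss. A minimal PyTorch sketch under the usual assumptions (diagonal-Gaussian encoder, standard-normal prior, Bernoulli decoder); the function and argument names and the beta knob are illustrative, not code from the committed notes:

import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar, beta=1.0):
    # Reconstruction term: negative log-likelihood of x under the decoder
    # (Bernoulli likelihood -> binary cross-entropy on values in [0, 1]).
    recon = F.binary_cross_entropy(recon_x, x, reduction='sum')
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) for a diagonal Gaussian:
    # -0.5 * sum(1 + log sigma^2 - mu^2 - sigma^2).  This is the regularizer
    # that pulls the latent space toward a standard normal (pros, item 5).
    kld = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    # beta = 1 is the standard ELBO; raising beta strengthens the regularizer
    # at the cost of blurrier reconstructions, lowering it does the opposite
    # -- the balance the cons list (item 3) calls "tricky".
    return recon + beta * kld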
doc/pub/week13/html/week13-reveal.html

Lines changed: 24 additions & 0 deletions
@@ -1276,6 +1276,30 @@ <h2 id="code-in-pytorch-for-vaes">Code in PyTorch for VAEs </h2>
 </div>
 </section>

+<section>
+<h2 id="pros-and-cons-about-vaes">Pros and cons about VAEs </h2>
+
+<ol>
+<p><li> Generative Capability: VAEs are powerful generative models that can produce new data similar to the training data.</li>
+<p><li> Latent Space Structure: The latent space learned by VAEs is continuous, structured, and often interpretable, which makes it useful for interpolation, clustering, and downstream tasks.</li>
+<p><li> Theoretical Foundation: VAEs are grounded in probability theory and Bayesian inference, offering a principled approach to modeling uncertainty.</li>
+<p><li> Efficient Training: Unlike GANs (discussed later), VAEs typically train more stably and don&#8217;t suffer from mode collapse.</li>
+<p><li> Regularization: The KL divergence term in the loss encourages the latent space to follow a standard normal distribution, aiding generalization.
+<!-- o Semi-supervised Learning: VAEs can be adapted for semi-supervised tasks due to their probabilistic framework. --></li>
+</ol>
+</section>
+
+<section>
+<h2 id="pros-and-cons-about-vaes">Pros and cons about VAEs </h2>
+<ol>
+<p><li> Blurry Reconstructions: VAEs often produce blurrier outputs compared to GANs due to the use of a probabilistic decoder and the Gaussian likelihood assumption.</li>
+<p><li> Posterior Collapse: The model might ignore the latent variables entirely (especially with powerful decoders), leading to <b>posterior collapse</b>.</li>
+<p><li> Trade-off in Loss Function: Balancing the reconstruction term and the KL divergence can be tricky, and tuning this balance is crucial.</li>
+<p><li> Assumed Distribution Limitations: The prior (usually Gaussian) and the approximated posterior may not capture the true data distribution well.</li>
+<p><li> Less Sharp Compared to GANs: In image generation tasks, the samples are usually less sharp and detailed than those generated by GANs.</li>
+</ol>
+</section>
+
 <section>
 <h2 id="diffusion-models-basics">Diffusion models, basics </h2>

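One standard response to the posterior collapse named in the cons list is to anneal the weight on the KL term from 0 to 1 early in training, so the decoder cannot learn to ignore the latent code before it carries information. A sketch only; the linear schedule and the names below are illustrative assumptions, not the method of the committed notes:

def kl_weight(step, warmup_steps=10000):
    # Linear KL annealing: the weight grows from 0 to 1 over warmup_steps,
    # then stays at 1 (recovering the ordinary ELBO).
    return min(1.0, step / warmup_steps)

# Inside the training loop the annealed loss would then read
#   loss = recon + kl_weight(step) * kld
# with recon and kld computed as in the vae_loss sketch above.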
doc/pub/week13/html/week13-solarized.html

Lines changed: 22 additions & 0 deletions
@@ -151,6 +151,8 @@
  None,
  'code-examples-using-keras'),
 ('Code in PyTorch for VAEs', 2, None, 'code-in-pytorch-for-vaes'),
+('Pros and cons about VAEs', 2, None, 'pros-and-cons-about-vaes'),
+('Pros and cons about VAEs', 2, None, 'pros-and-cons-about-vaes'),
 ('Diffusion models, basics', 2, None, 'diffusion-models-basics'),
 ('Problems with probabilistic models',
  2,
@@ -1226,6 +1228,26 @@ <h2 id="code-in-pytorch-for-vaes">Code in PyTorch for VAEs </h2>
 </div>


+<!-- !split --><br><br><br><br><br><br><br><br><br><br>
+<h2 id="pros-and-cons-about-vaes">Pros and cons about VAEs </h2>
+
+<ol>
+<li> Generative Capability: VAEs are powerful generative models that can produce new data similar to the training data.</li>
+<li> Latent Space Structure: The latent space learned by VAEs is continuous, structured, and often interpretable, which makes it useful for interpolation, clustering, and downstream tasks.</li>
+<li> Theoretical Foundation: VAEs are grounded in probability theory and Bayesian inference, offering a principled approach to modeling uncertainty.</li>
+<li> Efficient Training: Unlike GANs (discussed later), VAEs typically train more stably and don&#8217;t suffer from mode collapse.</li>
+<li> Regularization: The KL divergence term in the loss encourages the latent space to follow a standard normal distribution, aiding generalization.
+<!-- o Semi-supervised Learning: VAEs can be adapted for semi-supervised tasks due to their probabilistic framework. --></li>
+</ol>
+<!-- !split --><br><br><br><br><br><br><br><br><br><br>
+<h2 id="pros-and-cons-about-vaes">Pros and cons about VAEs </h2>
+<ol>
+<li> Blurry Reconstructions: VAEs often produce blurrier outputs compared to GANs due to the use of a probabilistic decoder and the Gaussian likelihood assumption.</li>
+<li> Posterior Collapse: The model might ignore the latent variables entirely (especially with powerful decoders), leading to <b>posterior collapse</b>.</li>
+<li> Trade-off in Loss Function: Balancing the reconstruction term and the KL divergence can be tricky, and tuning this balance is crucial.</li>
+<li> Assumed Distribution Limitations: The prior (usually Gaussian) and the approximated posterior may not capture the true data distribution well.</li>
+<li> Less Sharp Compared to GANs: In image generation tasks, the samples are usually less sharp and detailed than those generated by GANs.</li>
+</ol>
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="diffusion-models-basics">Diffusion models, basics </h2>

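The stable training credited to VAEs in the pros list (item 4) rests on the reparameterization trick, which makes the sampling step differentiable so the whole model trains by plain backpropagation. A minimal sketch; the helper name is illustrative:

import torch

def reparameterize(mu, logvar):
    # Draw z = mu + sigma * eps with eps ~ N(0, I).  The randomness sits in
    # eps, outside the computation graph, so gradients flow through both
    # mu and logvar.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps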
doc/pub/week13/html/week13.html

Lines changed: 22 additions & 0 deletions
@@ -228,6 +228,8 @@
  None,
  'code-examples-using-keras'),
 ('Code in PyTorch for VAEs', 2, None, 'code-in-pytorch-for-vaes'),
+('Pros and cons about VAEs', 2, None, 'pros-and-cons-about-vaes'),
+('Pros and cons about VAEs', 2, None, 'pros-and-cons-about-vaes'),
 ('Diffusion models, basics', 2, None, 'diffusion-models-basics'),
 ('Problems with probabilistic models',
  2,
@@ -1303,6 +1305,26 @@ <h2 id="code-in-pytorch-for-vaes">Code in PyTorch for VAEs </h2>
 </div>


+<!-- !split --><br><br><br><br><br><br><br><br><br><br>
+<h2 id="pros-and-cons-about-vaes">Pros and cons about VAEs </h2>
+
+<ol>
+<li> Generative Capability: VAEs are powerful generative models that can produce new data similar to the training data.</li>
+<li> Latent Space Structure: The latent space learned by VAEs is continuous, structured, and often interpretable, which makes it useful for interpolation, clustering, and downstream tasks.</li>
+<li> Theoretical Foundation: VAEs are grounded in probability theory and Bayesian inference, offering a principled approach to modeling uncertainty.</li>
+<li> Efficient Training: Unlike GANs (discussed later), VAEs typically train more stably and don&#8217;t suffer from mode collapse.</li>
+<li> Regularization: The KL divergence term in the loss encourages the latent space to follow a standard normal distribution, aiding generalization.
+<!-- o Semi-supervised Learning: VAEs can be adapted for semi-supervised tasks due to their probabilistic framework. --></li>
+</ol>
+<!-- !split --><br><br><br><br><br><br><br><br><br><br>
+<h2 id="pros-and-cons-about-vaes">Pros and cons about VAEs </h2>
+<ol>
+<li> Blurry Reconstructions: VAEs often produce blurrier outputs compared to GANs due to the use of a probabilistic decoder and the Gaussian likelihood assumption.</li>
+<li> Posterior Collapse: The model might ignore the latent variables entirely (especially with powerful decoders), leading to <b>posterior collapse</b>.</li>
+<li> Trade-off in Loss Function: Balancing the reconstruction term and the KL divergence can be tricky, and tuning this balance is crucial.</li>
+<li> Assumed Distribution Limitations: The prior (usually Gaussian) and the approximated posterior may not capture the true data distribution well.</li>
+<li> Less Sharp Compared to GANs: In image generation tasks, the samples are usually less sharp and detailed than those generated by GANs.</li>
+</ol>
 <!-- !split --><br><br><br><br><br><br><br><br><br><br>
 <h2 id="diffusion-models-basics">Diffusion models, basics </h2>

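The continuous, structured latent space from the pros list (item 2) is what makes interpolation work: decoding points on a straight line between two latent codes yields a smooth morph between the corresponding samples. A sketch assuming only that decoder maps latent vectors to data; all names are illustrative:

import torch

def interpolate(decoder, z1, z2, steps=8):
    # Decode evenly spaced points on the segment from z1 to z2; with a
    # well-regularized latent space the outputs change gradually.
    ts = torch.linspace(0.0, 1.0, steps)
    return [decoder((1 - t) * z1 + t * z2) for t in ts]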