## Description

The Neural Networks tutorial contains outdated patterns that should be modernized.

## Changes needed

### Suboptimal / Outdated Patterns
| Issue | Current Code | Modern Alternative | Notes |
| --- | --- | --- | --- |
| Old-style `super()` call | `super(Net, self).__init__()` | `super().__init__()` | Python 3 allows argument-less `super()`. All other tutorials in this blitz series already use `super().__init__()`. |
| Accessing `.data` for parameter update (in commented code) | `f.data.sub_(f.grad.data * learning_rate)` | `with torch.no_grad(): f -= f.grad * learning_rate`, or use an optimizer | Accessing `.data` bypasses autograd safety checks. The code is in a comment block but still teaches an outdated pattern. |
| Using `net.zero_grad()` instead of the optimizer | `net.zero_grad()` | `optimizer.zero_grad()` (when an optimizer is available) | `net.zero_grad()` works, but `optimizer.zero_grad()` is the idiomatic pattern shown in the optimizer code block below it. |
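For reference, the three modernized patterns above could look roughly like the following sketch. The `Net` class and hyperparameters here are illustrative stand-ins, not the tutorial's exact code:

```python
import torch
import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):
    def __init__(self):
        super().__init__()            # argument-less super(), Python 3 style
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

net = Net()
x = torch.randn(1, 4)
target = torch.randn(1, 2)
criterion = nn.MSELoss()
learning_rate = 0.01

# Manual update without touching .data: wrap the in-place step in no_grad()
loss = criterion(net(x), target)
net.zero_grad()
loss.backward()
with torch.no_grad():
    for f in net.parameters():
        f -= f.grad * learning_rate   # safe in-place update, no .data access

# Idiomatic alternative: let an optimizer handle the update
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
optimizer.zero_grad()                 # preferred over net.zero_grad() here
loss = criterion(net(x), target)
loss.backward()
optimizer.step()
```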
## Files
- `beginner_source/blitz/neural_networks_tutorial.py`