Update Neural Networks tutorial — modernize super(), zero_grad, and .data usage #3877

@sekyondaMeta

Description

The Neural Networks tutorial contains outdated patterns that should be modernized.

Changes needed

Suboptimal / Outdated Patterns

| Issue | Current Code | Modern Alternative | Notes |
| --- | --- | --- | --- |
| Old-style `super()` call | `super(Net, self).__init__()` | `super().__init__()` | Python 3+ allows the argument-less `super()`. All other tutorials in this blitz series already use `super().__init__()`. |
| Accessing `.data` for a parameter update (in commented code) | `f.data.sub_(f.grad.data * learning_rate)` | `with torch.no_grad(): f -= f.grad * learning_rate`, or use an optimizer | Accessing `.data` bypasses autograd safety checks. The code is in a comment block, but it still teaches an outdated pattern. |
| Using `net.zero_grad()` instead of the optimizer | `net.zero_grad()` | `optimizer.zero_grad()` (when an optimizer is available) | `net.zero_grad()` works, but `optimizer.zero_grad()` is the idiomatic pattern shown in the optimizer code block below it. |
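For reference, a minimal sketch showing all three modernized patterns together. The tiny `Net`, tensor shapes, and learning rate here are illustrative placeholders, not the tutorial's actual model; in real code you would use either the optimizer step or the manual `no_grad` update, not both.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical minimal network for illustration only.
class Net(nn.Module):
    def __init__(self):
        super().__init__()          # modern argument-less super() (Python 3+)
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

net = Net()
optimizer = optim.SGD(net.parameters(), lr=0.01)
criterion = nn.MSELoss()

x = torch.randn(1, 4)
target = torch.randn(1, 2)

# Idiomatic training step: zero gradients via the optimizer, not the module.
optimizer.zero_grad()
loss = criterion(net(x), target)
loss.backward()
optimizer.step()

# Manual alternative without touching .data: update inside torch.no_grad(),
# so the update itself is not tracked by autograd.
learning_rate = 0.01
with torch.no_grad():
    for f in net.parameters():
        f -= f.grad * learning_rate
```

The `torch.no_grad()` form replaces the commented `f.data.sub_(...)` pattern: it performs the same in-place update but without bypassing autograd's safety checks.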
