Description
The Defining New autograd Functions tutorial contains outdated patterns that should be modernized.
Changes needed
Suboptimal / Outdated Patterns
| Issue | Current Code | Modern Alternative | Notes |
|---|---|---|---|
| Hardcoded CPU device with commented-out CUDA | `device = torch.device("cpu")` / `# device = torch.device("cuda:0")` | `device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"` | PyTorch 2.4 introduced accelerator-agnostic device selection. |
| `autograd.Function` using `forward(ctx, ...)` for context | `def forward(ctx, input):` | Consider using `@staticmethod def forward(input):` with a separate `def setup_context(ctx, inputs, output):` | PyTorch 2.0+ introduced `setup_context` as the preferred pattern for new code; the old style still works but is no longer recommended. |
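The two modernizations above can be sketched together as follows. This is a minimal sketch, not the tutorial's final code: it reuses the `P3` Legendre polynomial from `polynomial_custom_function.py`, and guards `torch.accelerator` with `hasattr` so it also runs on PyTorch builds older than 2.4.

```python
import torch

# Accelerator-agnostic device selection (torch.accelerator, PyTorch 2.4+).
# The hasattr guard is an addition for older builds, not part of the tutorial.
if hasattr(torch, "accelerator") and torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator().type
else:
    device = "cpu"

class LegendrePolynomial3(torch.autograd.Function):
    """P3(x) = (5x^3 - 3x) / 2, the polynomial used in the tutorial."""

    @staticmethod
    def forward(input):
        # Modern pattern (PyTorch 2.0+): forward no longer receives ctx.
        return 0.5 * (5 * input**3 - 3 * input)

    @staticmethod
    def setup_context(ctx, inputs, output):
        # State needed by backward is saved here instead of in forward.
        (input,) = inputs
        ctx.save_for_backward(input)

    @staticmethod
    def backward(ctx, grad_output):
        # dP3/dx = 1.5 * (5x^2 - 1)
        (input,) = ctx.saved_tensors
        return grad_output * 1.5 * (5 * input**2 - 1)

x = torch.linspace(-1.0, 1.0, steps=5, device=device, requires_grad=True)
y = LegendrePolynomial3.apply(x)
y.sum().backward()
```

With `setup_context`, the same `forward` also becomes compatible with `torch.func` transforms, which is part of why it is the recommended style for new code.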
Files
beginner_source/examples_autograd/polynomial_custom_function.py