Update Custom autograd Function example — use torch.accelerator, modernize Function pattern #3880

@sekyondaMeta

Description

The [Defining New autograd Functions](https://docs.pytorch.org/tutorials/beginner/examples_autograd/polynomial_custom_function.html) tutorial contains outdated patterns that should be modernized.

Changes needed

Suboptimal / Outdated Patterns

| Issue | Current Code | Modern Alternative | Notes |
| --- | --- | --- | --- |
| Hardcoded CPU device with commented-out CUDA | `device = torch.device("cpu")` / `# device = torch.device("cuda:0")` | `device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"` | PyTorch 2.4 introduced accelerator-agnostic device selection. |
| `autograd.Function` using `forward(ctx, ...)` for context | `def forward(ctx, input):` | `@staticmethod def forward(input):` with a separate `def setup_context(ctx, inputs, output):` | PyTorch 2.0+ introduced `setup_context` as the preferred pattern for new code; the old style still works but is no longer recommended. |
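For the first row, a minimal sketch of what the tutorial's device-selection block could become. The `hasattr` guard is an extra defensive touch (not in the one-liner above) so the snippet also degrades gracefully on PyTorch builds that predate the `torch.accelerator` module:

```python
import torch

# Accelerator-agnostic device selection: pick whatever accelerator
# backend is available (cuda, mps, xpu, ...), otherwise fall back to CPU.
# The hasattr guard is defensive, for older PyTorch without torch.accelerator.
if hasattr(torch, "accelerator") and torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator().type
else:
    device = "cpu"

# Tensors are then created on the selected device, as in the tutorial.
x = torch.randn(3, 3, device=device)
```

This replaces both the hardcoded `"cpu"` line and the commented-out `"cuda:0"` line with a single branch that works unchanged on any backend.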
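For the second row, a sketch of the tutorial's `LegendrePolynomial3` Function rewritten in the PyTorch 2.0+ style: `forward` no longer receives `ctx`, and a separate `setup_context` saves tensors for `backward`. The polynomial P3(x) = (5x³ − 3x)/2 is the example the tutorial itself uses:

```python
import torch


class LegendrePolynomial3(torch.autograd.Function):
    @staticmethod
    def forward(input):
        # forward() now takes only the inputs, no ctx argument.
        return 0.5 * (5 * input ** 3 - 3 * input)

    @staticmethod
    def setup_context(ctx, inputs, output):
        # setup_context() receives the forward inputs and output
        # and is responsible for saving anything backward() needs.
        (input,) = inputs
        ctx.save_for_backward(input)

    @staticmethod
    def backward(ctx, grad_output):
        # dP3/dx = 1.5 * (5x^2 - 1)
        (input,) = ctx.saved_tensors
        return grad_output * 1.5 * (5 * input ** 2 - 1)


x = torch.tensor([2.0], requires_grad=True)
y = LegendrePolynomial3.apply(x)
y.backward()
# At x = 2: dP3/dx = 1.5 * (5*4 - 1) = 28.5
```

Splitting context setup out of `forward` is what makes the Function compatible with functorch-style transforms, which is why the docs now recommend it for new code.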