
Fix outdated torch_logs tutorial by removing CUDA device check #3770

Open
ManasVardhan wants to merge 4 commits into pytorch:main from ManasVardhan:fix-torch-logs-docs

Conversation

@ManasVardhan

Fix

The torch_logs.py tutorial was wrapped in a CUDA device capability check:

if torch.cuda.get_device_capability() < (7, 0):
    print("Skipping because torch.compile is not supported on this device.")
else:
    # ... entire tutorial ...

When built on CI without a compatible GPU, this caused the rendered docs to show only "Skipping because torch.compile is not supported on this device." instead of actual compilation log output.

Changes

  • Removed the CUDA device capability check that gated the entire tutorial
  • Switched to dynamic device selection (CUDA if available, otherwise CPU)
  • torch.compile works on CPU, so the tutorial now produces meaningful log output regardless of the build environment

Fixes pytorch/pytorch#137285

cc @PaliC @mlazos

The tutorial was wrapped in a CUDA device capability check that caused
the entire example to be skipped when built on CI without a compatible
GPU. This resulted in the docs showing 'Skipping because torch.compile
is not supported on this device' instead of actual log output.

torch.compile works on CPU, so this fix removes the device check and
uses a dynamic device selection (CUDA if available, otherwise CPU).
This ensures the tutorial produces meaningful log output regardless of
the build environment.

Fixes pytorch/pytorch#137285
@pytorch-bot

pytorch-bot bot commented Feb 16, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3770

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 971bfec with merge base 6f800e1:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla

meta-cla bot commented Feb 16, 2026

Hi @ManasVardhan!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (eg your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@meta-cla meta-cla bot added the cla signed label Feb 17, 2026
@ManasVardhan
Author

I've signed the CLA. Could you re-run the CLA check? Thanks!

@svekars svekars requested a review from mlazos February 17, 2026 16:41
@mlazos
Contributor

mlazos commented Mar 3, 2026

Hi, can you also allow this on CPU while keeping the gate? torch.compile is not supported on CUDA devices with capability lower than that version, and users can hit strange errors if it isn't gated.


@mlazos mlazos left a comment


Take a look at my other comments: please leave the version gate, but also allow CPU.

@ManasVardhan
Copy link
Author

Hey, good call! Updated the gate so it only skips on CUDA devices with capability < 7.0. If there's no CUDA or if the device is good enough, it just picks the right device (CPU or CUDA) and runs normally. Should work fine on CPU-only machines now too.
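
The updated gate described in the comment above looks roughly like this (a sketch of the intent, not the exact code in the PR):

```python
import torch

# Skip only when a CUDA device is present but too old for torch.compile.
# CPU-only machines fall through and run the tutorial on CPU.
if torch.cuda.is_available() and torch.cuda.get_device_capability() < (7, 0):
    print("Skipping because torch.compile is not supported on this device.")
else:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # ... rest of the tutorial runs on `device` ...
```

Note the order matters: checking `torch.cuda.is_available()` first avoids calling `torch.cuda.get_device_capability()` on machines without CUDA, where it would raise an error.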


Successfully merging this pull request may close these issues.

Docs are little bit outdated for torch logs
