Use C++20 for SYCL device compilation #3278

Merged

chuanqi129 merged 1 commit into intel:main from r-barnes:richard/cpp20 on Apr 14, 2026

Conversation

@r-barnes
Contributor

r-barnes commented Apr 7, 2026

Add -std=${CPP_STD} to SYCL_KERNEL_OPTIONS so that the device compiler (icpx) uses C++20, matching the host compiler flags. This is needed because PyTorch now requires C++20 via header guards in ATen/ATen.h, and device compilation that includes these headers must also use C++20.

Copilot AI review requested due to automatic review settings April 7, 2026 17:56
@r-barnes
Contributor Author

r-barnes commented Apr 7, 2026

@EikanWang - can you review this PR as well?

Contributor

Copilot AI left a comment


Pull request overview

Updates SYCL device compilation flags so the device compiler uses the same C++ standard as the host compilation, addressing PyTorch’s newer requirement for C++20 when including ATen headers during device compilation.

Changes:

  • Add -std=${CPP_STD} to SYCL_KERNEL_OPTIONS so SYCL kernel/device compilation uses C++20 (via CPP_STD).


pytorchmergebot pushed a commit to pytorch/pytorch that referenced this pull request Apr 7, 2026
#178150 enforces C++20 via #error guards in ATen/ATen.h and other headers. Any code — host or device — that includes these headers must be compiled with -std=c++20 or the build fails.

The SYCL build in torch-xpu-ops has two separate compilation passes: host compilation using gcc with flags from SYCL_HOST_FLAGS, and device kernel compilation using icpx with flags from SYCL_KERNEL_OPTIONS. Previously, both defaulted to -std=c++17.

intel/torch-xpu-ops#3272 fixed the host side by making CPP_STD unconditionally c++20, which flows into SYCL_HOST_FLAGS. Bumping xpu.txt to a36dd41 picks up this change, so host compilation now uses C++20 and no longer hits the #error guard.

The device side is not covered by intel/torch-xpu-ops#3272 — SYCL_KERNEL_OPTIONS does not include -std=${CPP_STD}. #179497 works around this by injecting -std=c++20 into SYCL_FLAGS from PyTorch's xpu.cmake, which flows into SYCL_COMPILE_FLAGS and reaches the device compiler. This workaround remains necessary until a follow-up PR (intel/torch-xpu-ops#3278) lands in torch-xpu-ops to add -std=${CPP_STD} to SYCL_KERNEL_OPTIONS directly, at which point #179497 can be reverted.
Pull Request resolved: #179613
Approved by: https://github.com/malfet, https://github.com/Skylion007
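The flag wiring described in the commit message above can be sketched in CMake. The variable names (`CPP_STD`, `SYCL_HOST_FLAGS`, `SYCL_KERNEL_OPTIONS`) come from the discussion; the surrounding `set`/`list` calls are illustrative, not the exact torch-xpu-ops code:

```cmake
# Illustrative sketch, not the exact torch-xpu-ops CMake.
set(CPP_STD c++20)  # made unconditional by intel/torch-xpu-ops#3272

# Host pass (gcc): already receives the standard via SYCL_HOST_FLAGS.
list(APPEND SYCL_HOST_FLAGS -std=${CPP_STD})

# Device pass (icpx): this PR appends the same flag so kernel compilation
# of code that includes ATen headers also runs in C++20 mode.
list(APPEND SYCL_KERNEL_OPTIONS -std=${CPP_STD})
```

Keeping both passes on the same `${CPP_STD}` variable means a future standard bump only needs to change one definition.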
Contributor

@EikanWang EikanWang left a comment


LGTM

@r-barnes
Contributor Author

@EikanWang - do you know if the CI failures are reliable? If not, could you merge?

@chuanqi129 chuanqi129 merged commit 5aa1a01 into intel:main Apr 14, 2026
27 of 30 checks passed
@EikanWang
Contributor

Oh, sorry for the late response, @r-barnes. And thanks @chuanqi129 for merging it.

FYI, we have submitted a PR to update the torch-xpu-ops commit pin so that it includes this change: https://github.com/pytorch/pytorch/pull/174168/changes#diff-0b051ab23955f71f42a99c36b4a3b3375ff7a917875586dcca90e306b87b3a5eR1

After that, we will update xpu.cmake accordingly.

