Conversation
Pull request overview
This PR fixes Windows build compatibility for modern CUDA/MSVC toolchains (CUDA 12+, Torch 2.9, Python 3.13) by addressing narrowing conversion errors and adding platform-specific compiler flags.
Changes:
- Added explicit `static_cast<int64_t>()` conversions for `torch::from_blob` and `torch::zeros` to fix MSVC narrowing conversion errors (see the sketch below)
- Removed non-standard `d` suffix from floating-point literals to improve C++ standard compliance
- Added Windows-specific MSVC compiler flags (`/permissive-`, `/Zc:__cplusplus`, `/EHsc`) for C++17 standard compliance
- Added `build_windows.bat` script for automated Windows build process
- Changed Linux C++ optimization level from `-O3` to `-O2`
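To make the first item concrete, here is a minimal sketch of the pattern; the function names, element types, and shapes are hypothetical and not taken from the o-voxel sources:

```cpp
#include <torch/torch.h>

#include <cstdint>
#include <vector>

// Hypothetical illustration of the narrowing fix; not the actual o-voxel code.
// std::vector::size() returns a size_t, and MSVC rejects the implicit
// unsigned-to-signed narrowing inside the braced shape list, so the size is
// cast to int64_t explicitly.
torch::Tensor wrap_coords(std::vector<int32_t>& coords) {
    return torch::from_blob(
               coords.data(),
               {static_cast<int64_t>(coords.size())},
               torch::kInt32)
        .clone();  // clone so the tensor owns its storage
}

// The same pattern applies to torch::zeros: the braced shape expects
// int64_t dimensions.
torch::Tensor make_buffer(size_t num_voxels) {
    return torch::zeros({static_cast<int64_t>(num_voxels), 3}, torch::kFloat32);
}
```

MSVC reports the uncast version as a hard error (the narrowing conversion mentioned in the summary), whereas the existing Linux builds tolerated it, which is why the explicit casts are needed only to unblock the Windows build.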
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| o-voxel/src/io/svo.cpp | Fixed narrowing conversions in tensor creation from vector sizes |
| o-voxel/src/io/filter_parent.cpp | Fixed narrowing conversions in tensor shape specifications |
| o-voxel/src/io/filter_neighbor.cpp | Fixed narrowing conversions in tensor shape specifications |
| o-voxel/src/convert/flexible_dual_grid.cpp | Removed non-standard double literal suffixes for portability |
| o-voxel/setup.py | Added Windows-specific compiler flags and platform detection; changed Linux optimization level |
| build_windows.bat | Added Windows build automation script with version checks and dependency installation |
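The `o-voxel/src/convert/flexible_dual_grid.cpp` row corresponds to the literal-suffix item in the change list above; a hypothetical before/after, with an illustrative variable name and value, looks like this:

```cpp
// Before: the 'd' suffix on a floating-point literal is not standard C++,
// and MSVC rejects it.
// double half_cell = 0.5d;

// After: an unsuffixed floating-point literal already has type double, so
// dropping the suffix changes neither the value nor the type.
double half_cell = 0.5;
```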
| if not "%CUDA_VERSION%"=="13.0" ( | ||
| echo %red%ERROR: CUDA 13.0 required%reset% | ||
| echo Detected CUDA: %CUDA_VERSION% | ||
| exit /b 1 | ||
| ) |
The CUDA version check is very strict, requiring exactly CUDA 13.0. However, the description mentions "CUDA 12+" support. Users with CUDA 12.x might be unable to use this build script even though it should theoretically work. Consider making this check more flexible to accept CUDA 12.x and 13.x versions, or update the error message to clarify that only CUDA 13.0 is officially supported by this script.
```bat
%PYTHON_PATH% -m pip install git+https://github.com/JeffreyXiang/FlexGEMM --no-build-isolation --no-cache-dir
popd

echo %green%FlexGEMM build finished...%reset%

:: ===============================
:: CuMesh
:: ===============================
echo.
echo %green%Building CuMesh...%reset%

pushd cumesh
%PYTHON_PATH% -m pip uninstall -y cumesh
%PYTHON_PATH% -m pip install git+https://github.com/JeffreyXiang/CuMesh --no-build-isolation --no-cache-dir
```
The PR description mentions that PRs #11 (FlexGEMM) and #21 (CuMesh) need to be merged first, but this script installs directly from the main branches of those repositories without checking if those PRs are merged. If those PRs haven't been merged, the build may fail. Consider adding a note in the script comments about this prerequisite or checking for the required fixes.
```bat
pushd nvdiffrast
%PYTHON_PATH% -m pip uninstall -y nvdiffrast
%PYTHON_PATH% -m pip install git+https://github.com/NVlabs/nvdiffrast.git --no-build-isolation --no-cache-dir
```
This build script installs nvdiffrast from the moving HEAD of the NVlabs/nvdiffrast repository via `pip install git+https://github.com/NVlabs/nvdiffrast.git`, which introduces an unpinned third-party dependency executed with your build/runtime privileges. If that repository or its dependency chain were compromised, an attacker could execute arbitrary code during installation, or later when the package is imported on any system running this script. To mitigate this supply-chain risk, pin nvdiffrast to a trusted immutable identifier (a specific commit, tag, or published wheel version) and avoid executing unverified remote code in automated build scripts.
Fix Windows build with modern CUDA / MSVC toolchain
Problems
Modern Windows builds (Torch 2.9 + CUDA 12/13 + Python 3.13) require explicit compiler flags to make MSVC behave consistently with Linux toolchains.
Also added a `build_windows.bat` script to build all dependencies (nvdiffrast, FlexGEMM, CuMesh, o-voxel).
Make sure you first merge these PRs:
JeffreyXiang/FlexGEMM#11
JeffreyXiang/CuMesh#21
Tested on