
Conversation

@sssshhhhhh
Contributor

Closes #1072

Thanks to everyone's work at arlo-phoenix/CTranslate2-rocm and in the linked issue.
Windows can be compiled with this script: https://github.com/sssshhhhhh/CTranslate2/blob/745f0b46aea94acef514185ed5facbb3fecd6dcd/python/tools/prepare_build_environment_windows.ps1
Linux can follow instructions at: https://github.com/arlo-phoenix/CTranslate2-rocm/blob/rocm/README_ROCM.md

Currently targeting ROCm 7.1.1. Passes all tests and successfully produces output for whisper and gemma3. For now, this is just enough to build for AMD; specific optimisations like flash attention are left for the future.
Some questions:

  - Should prebuilt wheels be a goal, or would letting people build them themselves be fine?
  - How should packaging be handled? My Windows wheels currently need a separate install of rocm_sdk_libraries_custom and include amdhip64_7.dll/amd_comgr0701.dll. The wheels are 58 MB each; removing the two DLLs drops that to 12 MB.
  - What should be targeted? Currently I'm building for the RDNA generations supported by ROCm 7. CDNA should work, but the wave size isn't optimal (NVIDIA/RDNA use 32). I'm also unsure about RDNA 2: this PR should work there, but its ROCm support seems poor and I don't have a card to test with.

@jordimas
Collaborator

> Currently targeting rocm 7.1.1. Passes all tests and successfully outputs for whisper and gemma3. […]

The distribution part is tricky for rocm. My recommendation from minimum to best:

  1. Be able to compile it (currently not possible, good progress)
  2. Provide a Dockerfile
  3. Provide a whl package

I will start with 1 and 2.

@sssshhhhhh
Contributor Author

sssshhhhhh commented Jan 28, 2026

Added Docker and Windows wheels (artifacts). I've given up on fixing the Linux wheel; it's dependency hell between cibuildwheel and ROCm.

Edit: see the new instructions below.

@sssshhhhhh
Contributor Author

I lied, I think I've figured out the Linux wheels; will add soon.

@sssshhhhhh
Contributor Author

sssshhhhhh commented Jan 31, 2026

Ok, actually done now. Currently building for ROCm 7.2.

Linux:
Install ROCm: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html
Get rocm-hip-libraries, which should include all necessary libs, then install the ctranslate2 wheel from the GitHub Actions artifacts.
The necessary ROCm libs should be in $ROCM_PATH/lib. OpenMP is also needed and can be found at $ROCM_PATH/lib/llvm/lib/libomp.so; $ROCM_PATH is usually /opt/rocm.
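The locations above can be sanity-checked before installing the wheel. A minimal sketch, assuming the default $ROCM_PATH of /opt/rocm and only the two paths mentioned above (a real install will contain many more libraries):

```python
# Sketch: check that the ROCm paths the wheel expects are visible.
# Paths follow the instructions above ($ROCM_PATH defaulting to /opt/rocm);
# library names beyond libomp.so are left to the ROCm install itself.
import os

rocm_path = os.environ.get("ROCM_PATH", "/opt/rocm")
candidates = [
    os.path.join(rocm_path, "lib"),                              # core ROCm libs
    os.path.join(rocm_path, "lib", "llvm", "lib", "libomp.so"),  # OpenMP runtime
]
report = {path: os.path.exists(path) for path in candidates}
for path, found in report.items():
    print(("found" if found else "MISSING") + ": " + path)
```

If either entry prints MISSING, fix the ROCm install (or export ROCM_PATH) before blaming the wheel.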

Windows:
https://rocm.docs.amd.com/projects/radeon-ryzen/en/latest/docs/install/installrad/windows/install-pytorch.html
Follow the prerequisites, then install rocm_sdk_core/rocm_sdk_libraries_custom from step 1,
and install the ctranslate2 wheel from the GitHub Actions artifacts.
If using torch at the same time you might get an `OMP: Error #15` (duplicate OpenMP runtime). Either symlink the libomp in site-packages/torch/lib/ to the one in site-packages/ctranslate2 to fix it, or set KMP_DUPLICATE_LIB_OK.
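The KMP_DUPLICATE_LIB_OK workaround only helps if it is applied before either library loads its OpenMP runtime. A minimal sketch (the torch/ctranslate2 imports are illustrative and commented out, since this is only about ordering):

```python
# Workaround sketch for the duplicate OpenMP runtime error ("OMP: Error #15")
# when torch and ctranslate2 each ship their own libomp.
# The variable must be set BEFORE either library is imported.
import os

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# import torch        # illustrative: both can now load despite the duplicate runtime
# import ctranslate2
```

Intel's OpenMP runtime documents this variable as an unsafe, unsupported escape hatch, so the symlink fix above is the cleaner of the two options.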

@jordimas
Collaborator

jordimas commented Feb 1, 2026

Hello @sssshhhhhh!

For the next release I plan to publish the ROCm wheel as part of the release:

https://github.com/jordimas/CTranslate2/releases

And also the docker images:

https://github.com/jordimas/CTranslate2/pkgs/container/ctranslate2

These will be published in https://github.com/OpenNMT/CTranslate2; the ones in my fork are just to test the process.

Let me know if these wheels and Docker images work.

@sssshhhhhh
Contributor Author

Everything works, but Linux needs some environment variables set; the new commits should remove that requirement.

@jordimas
Collaborator

jordimas commented Feb 2, 2026

Thanks, @sssshhhhhh, this is outstanding work!

@jordimas jordimas merged commit 68917da into OpenNMT:master Feb 2, 2026
20 checks passed


Development

Successfully merging this pull request may close these issues.

Feature request: AMD GPU support with oneDNN AMD support
