Built with Courage. Built with Fire. Built with FAME.
This project provides a complete and clean build kit for compiling a working PyTorch ROCm version on AMD GPUs, without the usual headaches involving ROCm, HIP, OpenMP, or Git submodules.
You will find everything needed to build, install, and test PyTorch from source, generating a valid .whl package for your local machine.
## Contents

| File/Folder | Purpose |
|---|---|
| `build_torch.sh` | Script to build PyTorch from source |
| `install_torch.sh` | Script to install the generated wheel file |
| `test_torch.sh` | Script to test the PyTorch import and ROCm status |
| `fame_torch_freeze.txt` | Environment freeze (package versions) |
| `pytorch-wheel/` | (Optional) Folder containing the `.whl` file |
| `MY_WHEELI_NOTES.md` | Notes related to the wheel build |
| `PyTorch_ROCm_Build_Notes.md` | Extended ROCm build notes |
| `README.md` | This document, the Fame Masterplan |
## Quick Start

1. Activate your ROCm environment:

   ```bash
   source ~/rocm_env/bin/activate
   ```

2. Navigate to the project folder:

   ```bash
   cd fame-pytorch-kit/
   ```

3. Build PyTorch:

   ```bash
   bash build_torch.sh
   ```

The generated `.whl` file will be located in `pytorch/dist/`.
## ROCm Build

For a ROCm-enabled build, the environment must provide:

```bash
USE_ROCM=1
CMAKE_ARGS="-DROCM_ARCH=gfx1100"
```

as well as the proper ROCm/HIP paths via `LD_LIBRARY_PATH`, `PATH`, etc.

> **Note:** If you skip this and install a `.whl` built only via `build_torch.sh`, you'll get a CPU-only PyTorch build. (Which is also valid, and a great first success. ROCm is optional and modular.)
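The flags above can be collected into a small helper that you source before running `build_torch.sh`. This is a sketch, not one of the kit's scripts; the `/opt/rocm` install prefix and the `gfx1100` GPU architecture (RDNA3, e.g. RX 7900 XTX) are assumptions you should adjust for your system:

```shell
# rocm_build_env.sh - hypothetical helper; source this before build_torch.sh.
# The /opt/rocm prefix and the gfx1100 arch are assumptions - adjust for your GPU.
export ROCM_PATH="${ROCM_PATH:-/opt/rocm}"
export USE_ROCM=1
export CMAKE_ARGS="-DROCM_ARCH=gfx1100"
export PATH="$ROCM_PATH/bin:$PATH"
export LD_LIBRARY_PATH="$ROCM_PATH/lib:${LD_LIBRARY_PATH:-}"
echo "USE_ROCM=$USE_ROCM ROCM_PATH=$ROCM_PATH"
```

Sourcing (rather than executing) the file keeps the exports alive in the shell that then launches the build.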
## Installation

```bash
bash install_torch.sh
```

Or manually:

```bash
cd pytorch/dist/
pip install torch-2.8.0a0+gitc402b3b-cp312-cp312-linux_x86_64.whl
```

Make sure your Python environment (venv) is activated!
## Testing

```bash
bash test_torch.sh
```

Or manually:

```bash
python -c "import torch; print(torch.cuda.is_available())"
python -c "import torch; print(torch.cuda.get_device_name(0))"
python -c "import torch; print(torch.version.hip)"
```

Expected output: `True`, the GPU name, and the HIP version string.
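The three manual checks can also be bundled into one guarded snippet that degrades gracefully on a CPU-only build instead of raising an error. This is a sketch, not part of the kit's scripts; note that on ROCm builds the `torch.cuda.*` API is backed by HIP:

```shell
# Combined sanity check - works on both ROCm and CPU-only builds.
python - <<'EOF'
try:
    import torch
except ImportError:
    print("PyTorch is not installed in this environment")
else:
    print("torch:", torch.__version__)
    print("GPU available:", torch.cuda.is_available())
    print("HIP version:", getattr(torch.version, "hip", None))
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))
EOF
```

On a CPU-only wheel this prints `GPU available: False` and `HIP version: None`, which is exactly the "valid first success" case mentioned above.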
## License

This project is licensed under the MIT License.
Fame Kit proudly built by sbeierle with Courage, Fire, and Fame.