diff --git a/README.md b/README.md
index 57c1a4124..c0a643eec 100644
--- a/README.md
+++ b/README.md
@@ -97,22 +97,31 @@ Includes [multiple gpus quantization](https://github.com/intel/auto-round/blob/m
### Install from PyPI
```bash
-# CPU/Intel GPU/CUDA
+# CPU(Xeon)/GPU(CUDA)
pip install auto-round
-# HPU
+# HPU(Gaudi)
+# Install inside the HPU Docker container, e.g. vault.habana.ai/gaudi-docker/1.23.0/ubuntu24.04/habanalabs/pytorch-installer-2.9.0:latest
pip install auto-round-hpu
+
+# XPU(Intel GPU)
+pip install torch --index-url https://download.pytorch.org/whl/xpu
+pip install auto-round
```
Build from Source
```bash
- # CPU/Intel GPU/CUDA
+ # CPU(Xeon)/GPU(CUDA)
pip install .
- # HPU
+ # HPU(Gaudi)
python setup.py install hpu
+
+ # XPU(Intel GPU)
+ pip install torch --index-url https://download.pytorch.org/whl/xpu
+ pip install .
```
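
For either install path above, a quick sanity check can confirm that the installed wheel and the visible accelerator line up. The snippet below is a minimal sketch and not part of this change; it assumes only `importlib.metadata` for the installed distribution name and the standard `torch.cuda` / `torch.xpu` availability queries available in recent PyTorch builds.

```python
# Minimal post-install sanity check (sketch, not part of this PR).
from importlib.metadata import version

import torch

# "auto-round" is the distribution name installed above
# (on Gaudi the distribution is "auto-round-hpu" instead).
print("auto-round version:", version("auto-round"))

# Device visibility: CUDA for NVIDIA GPUs, XPU for Intel GPUs.
print("cuda available:", torch.cuda.is_available())
if hasattr(torch, "xpu"):
    print("xpu available:", torch.xpu.is_available())
```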
diff --git a/README_CN.md b/README_CN.md
index 54212c74d..4ca044b73 100644
--- a/README_CN.md
+++ b/README_CN.md
@@ -86,22 +86,31 @@ AutoRound is designed specifically for large language models (LLMs) and vision-language models (VLMs)
### Install from PyPI
```shell
-# CPU / Intel GPU / CUDA
+# CPU(Xeon)/GPU(CUDA)
pip install auto-round
-# HPU
+# HPU(Gaudi)
+# Install inside the HPU Docker container, e.g. vault.habana.ai/gaudi-docker/1.23.0/ubuntu24.04/habanalabs/pytorch-installer-2.9.0:latest
pip install auto-round-hpu
+
+# XPU(Intel GPU)
+pip install torch --index-url https://download.pytorch.org/whl/xpu
+pip install auto-round
```
Build from Source
```bash
- # CPU/Intel GPU/CUDA
+ # CPU(Xeon)/GPU(CUDA)
pip install .
- # HPU
+ # HPU(Gaudi)
python setup.py install hpu
+
+ # XPU(Intel GPU)
+ pip install torch --index-url https://download.pytorch.org/whl/xpu
+ pip install .
```
diff --git a/setup.py b/setup.py
index c78aa1d78..2e391a8f1 100644
--- a/setup.py
+++ b/setup.py
@@ -137,7 +137,6 @@ def fetch_requirements(path):
package_name = "auto-round"
- # From v0.9.3, auto-round-hpu will be published to replace auto-round-lib.
hpu_build = "hpu" in sys.argv
if hpu_build:
sys.argv.remove("hpu")
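
The hunk above keeps the existing `hpu` build switch and only drops the stale comment about `auto-round-lib`. As a rough sketch of how that switch is typically consumed, see below; the package-name reassignment is an assumption, since the rest of setup.py is not shown in this diff.

```python
# Sketch of the "hpu" flag handling around the hunk above.
# The final name switch is an assumption, not taken from this diff.
import sys

package_name = "auto-round"

# A bare "hpu" token on the setup.py command line selects the Gaudi build.
hpu_build = "hpu" in sys.argv
if hpu_build:
    # Remove the custom token so setuptools does not treat it as a command.
    sys.argv.remove("hpu")
    package_name = "auto-round-hpu"  # assumed: publish under the HPU wheel name
```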