Hyeongjin Nam*1, Donghwan Kim*1, Gyeongsik Moon†2, Kyoung Mu Lee1
1Seoul National University, 2Korea University
(*Equal contribution)
PARTE is a 3D human reconstruction framework that recovers realistic human textures well aligned to each human part. The framework estimates 3D part segmentations of the human surface and uses them as the main guidance for reconstructing human textures.
We recommend using an Anaconda virtual environment. Our latest PARTE model was tested with Python 3.10.16, PyTorch 2.1.2, and CUDA 12.1.
Set up the environment with the script below.
```bash
# Initialize conda environment
conda create -n parte python=3.10
conda activate parte

# Install PyTorch
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121

# Install remaining packages
bash scripts/install.sh

# Install third-party modules
bash scripts/clone_thirdparties.sh
```
Please follow the Download Instruction to download all the required data files.
Download our released checkpoints of SegmentNet and PartDiffusion from our Hugging Face repo and place them under `${ROOT}/data/checkpoints/`.
- To run PARTE on your own image, prepare a yaml file that contains a text prompt for each body part, as in `${ROOT}/configs/prompts.yaml`, and run the following code.
```bash
cd main
python demo.py --img_path {PATH/TO/YOUR_IMAGE} --prompt_path {PATH/TO/YOUR_PROMPT} --exp_dir {PATH/TO/SAVE_DIR}
```
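For reference, a prompt file might look like the sketch below. The part names and exact keys here are illustrative assumptions; check `${ROOT}/configs/prompts.yaml` for the actual schema expected by the demo.

```yaml
# Hypothetical example; the real key names are defined in configs/prompts.yaml.
upper_body: "a red cotton t-shirt"
lower_body: "blue denim jeans"
shoes: "white sneakers"
hair: "short black hair"
```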
- (Optional) You can also automatically generate the text prompts using GPT-4o with the `--run_vllm` option. Place your OpenAI API key in `${ROOT}/data/openai.env` and run the following code.
```bash
cd main
python demo.py --img_path {PATH/TO/YOUR_IMAGE} --exp_dir {PATH/TO/SAVE_DIR} --run_vllm
```
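The key file is a plain env-style file. The variable name below is an assumption based on the common OpenAI convention; if the demo fails to pick up the key, check the repo's loading code for the expected name.

```bash
# ${ROOT}/data/openai.env — variable name is an assumption (OpenAI's conventional one)
OPENAI_API_KEY=sk-...
```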
We thank:
- TeCH for 3D human mesh reconstruction.
- Sapiens for human image segmentation.
- InstanceDiffusion for PartDiffusion training setup.
```bibtex
@inproceedings{nam2025parte,
  title     = {PARTE: Part-Guided Texturing for 3D Human Reconstruction from a Single Image},
  author    = {Nam, Hyeongjin and Kim, Donghwan and Moon, Gyeongsik and Lee, Kyoung Mu},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year      = {2025}
}
```

