
UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes

Yixun Liang$^{*}$, Kunming Luo$^{*}$, Xiao Chen$^{*}$, Rui Chen, Hongyu Yan, Weiyu Li, Jiarui Liu, Ping Tan$^{†}$

$^{*}$: Equal contribution. $^{†}$: Corresponding author.

The paper has been accepted to CVPR 2026.

📺 Video

UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes

Please click to watch the 3-minute video introduction of our project.

🎏 Abstract

We present a two-stage texturing framework, named UniTEX, that achieves high-fidelity textures for arbitrary 3D shapes.

CLICK for the full abstract

We present UniTEX, a novel two-stage 3D texture generation framework to create high-quality, consistent textures for 3D assets. Existing approaches predominantly rely on UV-based inpainting to refine textures after reprojecting the generated multi-view images onto the 3D shapes, which introduces challenges related to topological ambiguity. To address this, we propose to bypass the limitations of UV mapping by operating directly in a unified 3D functional space. Specifically, we first propose a novel framework that lifts texture generation into 3D space via Texture Functions (TFs)—a continuous, volumetric representation that maps any 3D point to a texture value based solely on surface proximity, independent of mesh topology. Then, we propose to predict these TFs directly from images and geometry inputs using a transformer-based Large Texturing Model (LTM). To further enhance texture quality and leverage powerful 2D priors, we develop an advanced LoRA-based strategy for efficiently adapting large-scale Diffusion Transformers (DiTs) for high-quality multi-view texture synthesis as our first stage. Extensive experiments demonstrate that UniTEX achieves superior visual quality and texture integrity compared to existing approaches, offering a generalizable and scalable solution for automated 3D texture generation.
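To make the Texture Function (TF) idea concrete: a TF can be queried at any 3D point and returns a texture value based only on proximity to the surface, with no UV map or mesh topology involved. The sketch below is a toy illustration under that definition, using a nearest-surface-sample color lookup over made-up sample points; it is not the paper's transformer-based LTM, which predicts TFs from image and geometry inputs.

```python
import numpy as np

def make_texture_function(surface_points, surface_colors):
    """Return a toy Texture Function: map any 3D query point to the color
    of its nearest sampled surface point. Topology-free by construction --
    only distances to surface samples matter, never UVs or connectivity."""
    def tf(query):  # query: (N, 3) array of 3D points
        # pairwise squared distances between queries and surface samples
        d2 = ((query[:, None, :] - surface_points[None, :, :]) ** 2).sum(-1)
        return surface_colors[d2.argmin(axis=1)]
    return tf

# toy "surface": two samples, one red and one blue (made-up data)
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
cols = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tf = make_texture_function(pts, cols)
# a query near the origin maps to red, regardless of any mesh topology
print(tf(np.array([[0.1, 0.0, 0.0]])))
```

The continuous, volumetric TF in the paper generalizes this lookup: it is defined everywhere in space, so texture values can be queried off-surface as well.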

🚧 Todo

  • Release the basic texturing code with FLUX LoRA checkpoints
  • Release the training code for FLUX LoRA (UniTEX-FLUX)
  • Release LTM checkpoints. (We have also refined the codebase so that checkpoints are downloaded automatically.)

🔧 Installation

Run bash env.sh to prepare your environment.

We also provide a step-by-step guide to install the dependencies via requirements.txt:

conda create -n unitex python=3.10 --yes
conda activate unitex
conda install cudatoolkit=11.8 --yes # check nvcc -V, it should be 11.8
pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 --index-url https://download.pytorch.org/whl/cu118
pip install kaolin -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.4.1_cu118.html
pip install -r requirements.txt
pip install git+https://github.com/NVlabs/nvdiffrast.git --no-build-isolation

If you run into issues with slangtorch, try replacing it with this specific version: slangtorch==1.2.0

Note: We noticed that some users encountered errors when using slangtorch==1.3.7. If you encounter the same issue, reinstalling slangtorch==1.3.4 should resolve the problem. (Check this issue, which also covers how to use our repo under cu121; thanks to HwanHeo.)

How to use?

Run the following code; the checkpoints will be downloaded automatically:

from pipeline import CustomRGBTextureFullPipeline
import os

rgb_tfp = CustomRGBTextureFullPipeline(super_resolutions=False,
                                       filt_gradient_points=False,
                                       filt_large_angle_points=True,
                                       seed=63)

test_image_path = {your reference image}
test_mesh_path = {your input mesh}
save_root = 'outputs/{your save folder}'
os.makedirs(save_root, exist_ok=True)
rgb_tfp(save_root, test_image_path, test_mesh_path, clear_cache=False)

You can also run

python run.py

to texture the provided example.

SR: To use super-resolution, prepare the checkpoints of the SR model TSD_SR and change the default paths in TSD_SR/sr_pipeline.py, lines 30-32:

parser.add_argument("--pretrained_model_name_or_path", type=str, default="stabilityai/stable-diffusion-3-medium-diffusers/", help='path to the pretrained sd3')
parser.add_argument("--lora_dir", type=str, default="your_lora_dir", help='path to tsd-sr lora weights')
parser.add_argument("--embedding_dir", type=str, default="your_emb_dir", help='path to prompt embeddings')

Then set super_resolutions=True in run.py.

Dilation tricks for inpainting

During inpainting, we provide two additional point filtering tricks:

filt_large_angle_points: Enabled by default in LTM. Filters out points where the angle between the model normal and the rendering view direction is too large.

filt_gradient_points: We observed that fine-grained high-frequency details often mismatch, especially after super-resolution (SR). This filter removes projected points in high‑frequency regions based on image gradient and fills them via inpainting. Note that this filter removes all high‑frequency details indiscriminately, so enabling it may reduce fine details. Since LTM generation is inherently less sharp than inpainting-based results, you can choose whether to enable it based on your needs.

You can enable them when initializing the pipeline, as shown in run.py.
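The two filters described above can be sketched in a few lines of NumPy. This is a hand-rolled illustration of the underlying idea, not the repository's implementation; the function names mirror the pipeline flags, but the thresholds (75° and 0.2) are arbitrary choices for the example.

```python
import numpy as np

def filt_large_angle_points(normals, view_dirs, max_angle_deg=75.0):
    """Keep points whose unit surface normal faces the camera within
    max_angle_deg. view_dirs are unit vectors from the surface point
    toward the camera. (Illustrative threshold, not the repo's default.)"""
    cos_thresh = np.cos(np.radians(max_angle_deg))
    return (normals * view_dirs).sum(axis=-1) >= cos_thresh

def filt_gradient_points(image, xy, grad_thresh=0.2):
    """Keep projected points that land in low-gradient image regions.
    xy holds integer (col, row) pixel coordinates; pixels with a large
    image gradient are treated as high-frequency and dropped, to be
    refilled later by inpainting. (Illustrative threshold.)"""
    gy, gx = np.gradient(image.astype(np.float64))
    grad_mag = np.hypot(gx, gy)
    return grad_mag[xy[:, 1], xy[:, 0]] < grad_thresh

# a normal facing the camera is kept; a grazing-angle normal is dropped
keep = filt_large_angle_points(np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]]),
                               np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]))
print(keep)  # [ True False]

# a point in a flat region is kept; one on a sharp intensity edge is dropped
img = np.zeros((4, 4)); img[:, 2:] = 1.0
print(filt_gradient_points(img, np.array([[0, 0], [1, 0]])))  # [ True False]
```

As noted above, the gradient filter is indiscriminate: any high-gradient pixel is dropped, whether the detail there was correct or not, which is why enabling it trades fine detail for consistency.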

Training your own FLUX LoRA

We also provide training code for texture generation and de-lighting, which can be adapted to other tasks as well. Please refer to UniTEX-FLUX for more details.

📍 Citation

If you find this project useful for your research, please cite:

@article{liang2025unitex,
  title={UniTEX: Universal High Fidelity Generative Texturing for 3D Shapes},
  author={Yixun Liang and Kunming Luo and Xiao Chen and Rui Chen and Hongyu Yan and Weiyu Li and Jiarui Liu and Ping Tan},
  journal={arXiv preprint arXiv:2505.23253},
  year={2025}
}

Acknowledgments

We would like to thank the following projects: FLUX, DINOv2, CLAY, Michelangelo, CraftsMan3D, TripoSG, Dora, Hunyuan3D 2.0, TSD_SR, Cosmos Tokenizer, diffusers, and HuggingFace for their open exploration and contributions. We would also like to express our gratitude to the closed-source 3D generative platforms Tripo, Rodin, and Hunyuan2.5 for providing such impressive geometry resources to the community. We sincerely appreciate their efforts and contributions.
