Run HuggingFace Models on MindSpore with Zero Code Changes
The easiest way to use 200,000+ HuggingFace models on Ascend NPU, GPU, and CPU
Quick Start • Features • Installation • Why MindNLP • Documentation
MindNLP bridges the gap between HuggingFace's massive model ecosystem and MindSpore's hardware acceleration. With just `import mindnlp`, you can run any HuggingFace model on Ascend NPU, NVIDIA GPU, or CPU - no code changes required.
```python
import mindnlp  # That's it! HuggingFace now runs on MindSpore
from transformers import pipeline

pipe = pipeline("text-generation", model="Qwen/Qwen2-0.5B")
print(pipe("Hello, I am")[0]["generated_text"])
```
Chat with an LLM:

```python
import mindspore
import mindnlp
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Qwen/Qwen3-8B",
    ms_dtype=mindspore.bfloat16,
    device_map="auto"
)
messages = [{"role": "user", "content": "Write a haiku about coding"}]
print(pipe(messages, max_new_tokens=100)[0]["generated_text"][-1]["content"])
```
Generate images with Diffusers:

```python
import mindspore
import mindnlp
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    ms_dtype=mindspore.float16
)
image = pipe("A sunset over mountains, oil painting style").images[0]
image.save("sunset.png")
```
Use any Transformers model directly:

```python
import mindnlp
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
inputs = tokenizer("MindNLP is awesome!", return_tensors="pt")
outputs = model(**inputs)
```
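For sequence classification, `outputs.logits` holds the raw class scores. A minimal sketch of turning them into a label, assuming mindnlp's tensors support the usual torch-style `argmax`/`item` calls (note that `bert-base-uncased` ships a randomly initialized classification head, so the prediction is meaningless until the model is fine-tuned):

```python
# Continuing the example above (hedged: assumes torch-style tensor API).
predicted_id = outputs.logits.argmax(-1).item()  # index of the highest-scoring class
print(model.config.id2label[predicted_id])       # map index to a label name
```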
```bash
# From PyPI (recommended)
pip install mindnlp

# From source (latest features)
pip install git+https://github.com/mindspore-lab/mindnlp.git
```

📋 Version Compatibility
| MindNLP | MindSpore | Python |
|---|---|---|
| 0.6.x | ≥2.7.1 | 3.10-3.11 |
| 0.5.x | 2.5.0-2.7.0 | 3.10-3.11 |
| 0.4.x | 2.2.x-2.5.0 | 3.9-3.11 |
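To confirm your environment matches a row in this table, you can print both versions (assuming `mindnlp` exposes `__version__` the way most packages do):

```python
# Sanity check: do the installed versions match the compatibility table?
import mindspore
import mindnlp

print("MindSpore:", mindspore.__version__)
print("MindNLP:", mindnlp.__version__)
```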
| Feature | MindNLP | PyTorch + HF | TensorFlow + HF |
|---|---|---|---|
| HuggingFace Models | ✅ 200K+ | ✅ 200K+ | |
| Ascend NPU Support | ✅ Native | ❌ | ❌ |
| Zero Code Migration | ✅ | - | ❌ |
| Unified API | ✅ | ✅ | ❌ |
| Chinese Model Support | ✅ Excellent | ✅ Good | |
- Instant Migration: Your existing HuggingFace code works immediately
- Ascend Optimization: Native support for Huawei NPU hardware (see the device-selection sketch after this list)
- Production Ready: Battle-tested in enterprise deployments
- Active Community: Regular updates and responsive support
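A minimal sketch of pinning execution to a specific backend, assuming the standard MindSpore `set_context` API carries through to models loaded via mindnlp:

```python
import mindspore
import mindnlp
from transformers import pipeline

# Select the backend before loading the model: "Ascend", "GPU", or "CPU".
mindspore.set_context(device_target="Ascend")

pipe = pipeline("text-generation", model="Qwen/Qwen2-0.5B")
print(pipe("Running on Ascend:")[0]["generated_text"])
```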
MindNLP supports all models from HuggingFace Transformers and Diffusers. Here are some popular ones:
| Category | Models |
|---|---|
| LLMs | Qwen, Llama, ChatGLM, Mistral, Phi, Gemma, BLOOM, Falcon |
| Vision | ViT, CLIP, Swin, ConvNeXt, SAM, BLIP |
| Audio | Whisper, Wav2Vec2, HuBERT, MusicGen |
| Diffusion | Stable Diffusion, SDXL, ControlNet |
| Multimodal | LLaVA, Qwen-VL, ALIGN |
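The import-and-go pattern from the Quick Start applies across these categories. For instance, a hedged sketch with an audio model (the model id and audio path are illustrative):

```python
import mindnlp
from transformers import pipeline

# Speech-to-text with a Whisper checkpoint; "sample.wav" is a placeholder path.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
print(asr("sample.wav")["text"])
```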
We welcome contributions! See our Contributing Guide for details.
```bash
# Clone and install for development
git clone https://github.com/mindspore-lab/mindnlp.git
cd mindnlp
pip install -e ".[dev]"
```

Join the MindSpore NLP SIG (Special Interest Group) for discussions, events, and collaboration.
If you find MindNLP useful, please consider giving it a star ⭐ - it helps the project grow!
MindNLP is released under the Apache 2.0 License.
If you use MindNLP in your work, please cite:

```bibtex
@misc{mindnlp2022,
    title={MindNLP: Easy-to-use and High-performance NLP and LLM Framework Based on MindSpore},
    author={MindNLP Contributors},
    howpublished={\url{https://github.com/mindspore-lab/mindnlp}},
    year={2022}
}
```

Made with ❤️ by the MindSpore Lab team

