A trusted list of reputable GGUF models sourced from Docker ai/* repositories.
Value proposition:
- Offer a curated, trusted list of GGUF models surfaced from Docker Hub.
- Provide GGUF downloads where corporate network restrictions may block Ollama, Hugging Face, or ModelScope.
- Automatically identify GGUF blobs to simplify importing into Ollama or llama.cpp (gguf_dump / llama-gguf metadata).
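GGUF blob detection can be done cheaply before reaching for `gguf_dump`, because every GGUF file starts with the ASCII magic bytes `GGUF`. Below is a minimal sketch of that check; `blob.bin` and the `is_gguf` helper are illustrative names, not part of the script's actual code.

```shell
# Check whether a file is a GGUF model by inspecting its magic bytes.
# GGUF files begin with the 4-byte ASCII magic "GGUF".
is_gguf() {
  [ "$(head -c 4 "$1" 2>/dev/null)" = "GGUF" ]
}

# Illustrative usage with a hypothetical blob file:
printf 'GGUF rest-of-file' > blob.bin
if is_gguf blob.bin; then
  echo "blob.bin looks like a GGUF model"
fi
```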
An interactive script downloads GGUF AI models via Docker and imports them into Ollama. It requires:
- Bash (macOS ships Bash 3.2)
- Docker Desktop (required) must be installed and running
- jq (used to parse the Docker Hub API)
- gguf_dump (from llama.cpp) or llama-gguf must be installed and on PATH; used to identify GGUF metadata
- Ollama (optional) to run the downloaded models
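The jq dependency is used to parse Docker Hub v2 API responses. As a rough sketch of that parsing step, the snippet below extracts repository names from a response-shaped JSON document; the sample payload is hand-written here, not a live API response, and the field names assume the public Docker Hub v2 schema.

```shell
# Hand-written sample mimicking the shape of a Docker Hub v2
# /v2/repositories/ai/ response; only the fields we read are included.
response='{"results":[{"name":"qwen2.5"},{"name":"gemma3"}]}'

# Extract the repository names, one per line, as the script does with jq.
echo "$response" | jq -r '.results[].name'
```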
Run with a single command:
bash <(curl -s https://raw.githubusercontent.com/Enelass/GGUF_Model_Downloader/refs/heads/main/download_docker_model.sh)
Changelog: CHANGELOG.md
Release process: RELEASING.md
Features:
- Fetches an up-to-date list of Docker Hub ai/* models every run
- Browse dozens of AI models (Qwen, DeepSeek, Gemma, LLaMA, Mistral, etc.)
- Interactive menu with arrow key navigation
- Automatic GGUF file detection
- Ready-to-use Ollama import commands
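The Ollama import typically works by writing a Modelfile that points at the downloaded GGUF and creating a model from it. This is a hedged sketch of that pattern; `model.gguf` and `my-model` are placeholder names, not values the script emits.

```shell
# Write a minimal Modelfile referencing the downloaded GGUF blob.
cat > Modelfile <<'EOF'
FROM ./model.gguf
EOF

# With a running Ollama daemon, the model would then be imported and run:
# ollama create my-model -f Modelfile
# ollama run my-model
```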
Controls:
- Arrow keys: navigate pages
- Number + Enter: select a model
- q: quit
That's it. Run the script, pick a model, and follow the on-screen instructions.


