Releases: autogluon/autogluon-assistant
v1.1.0
AutoGluon Assistant v1.1.0
What's New
AutoGluon-Assistant v1.1.0 brings major architectural improvements including MCTS-based solution search, meta prompting, chat mode, and enhanced agent customization capabilities.
MCTS-Based Solution Search
Replaced the previous manager with a Node-Based Manager powered by Monte Carlo Tree Search (MCTS) for intelligent solution exploration. This enables more systematic and effective search over candidate ML pipelines.
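The node-based manager's search strategy is standard MCTS; as a rough, illustrative sketch (not MLZero's actual implementation), UCT-style selection and backpropagation over candidate-solution nodes might look like this:

```python
import math

class Node:
    """A candidate solution state in the search tree."""
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

    def uct_score(self, c=1.4):
        # Unvisited children are explored first.
        if self.visits == 0:
            return float("inf")
        exploit = self.total_reward / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore

def select(node):
    """Descend the tree, always picking the child with the best UCT score."""
    while node.children:
        node = max(node.children, key=lambda n: n.uct_score())
    return node

def backpropagate(node, reward):
    """Propagate an evaluation result (e.g., a validation score) up the tree."""
    while node is not None:
        node.visits += 1
        node.total_reward += reward
        node = node.parent
```

In an AutoML setting, each node would represent a candidate pipeline; the reward could be a validation metric from executing the generated code.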
Meta Prompting
Introduced meta prompting to dynamically generate and refine prompts, improving the quality of LLM-driven code generation and planning across diverse ML tasks.
Chat Mode for Conversational Q&A
Added an interactive Chat Mode that allows users to have conversational Q&A sessions, making it easier to iteratively refine tasks and explore results.
Continuous Improvement
Enabled continuous improvement, allowing MLZero to refine solutions across multiple iterations for better performance.
Agent Customization
- Variable Registration: Support for registering custom variables for agent customization, enabling more flexible pipeline configuration.
- Data Visualization Agents: New visualization agents with a tutorial on how to customize agents for specific use cases.
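As a purely illustrative sketch of the variable-registration idea (this is not the actual AutoGluon Assistant API; all names here are hypothetical), a registry that substitutes registered variables into prompt templates might look like:

```python
class VariableRegistry:
    """Illustrative registry; the real AutoGluon Assistant API may differ."""
    def __init__(self):
        self._vars = {}

    def register(self, name: str, value):
        # Reject duplicate registrations to keep configuration unambiguous.
        if name in self._vars:
            raise ValueError(f"variable {name!r} already registered")
        self._vars[name] = value

    def resolve(self, template: str) -> str:
        # Substitute {name} placeholders in a prompt template.
        return template.format(**self._vars)

registry = VariableRegistry()
registry.register("metric", "roc_auc")
prompt = registry.resolve("Optimize the model for {metric}.")
```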
SageMaker LLM Provider Support
Added support for Amazon SageMaker as an LLM provider, expanding deployment options alongside existing Bedrock and OpenAI support.
Separate Environment for Each ML Library
ML library executions now run in isolated environments, preventing dependency conflicts between different libraries and improving stability.
Dockerfile for MLZero
Added an official Dockerfile for containerized deployment of MLZero.
Improvements
- Optimize Per-Iteration Output Saving/Removal Logic - Reduced disk usage and improved iteration management (#255) - @FANGAreNotGnu
- Improve Benchmarking Performance - Enhanced benchmarking tools and performance (#222) - @FANGAreNotGnu
- Update Tutorials and Improve CLI - Refreshed tutorials and CLI experience (#236) - @FANGAreNotGnu
- Advanced Settings for WebUI - Added advanced configuration options in the WebUI (#237) - @boranhan
- Sphinx Docs with Automated CI/CD Deployment - Automated documentation builds and deployment (#233) - @tonyhoo
- Update Python Version Requirement - Now requires Python 3.10 - 3.12 (#239) - @FANGAreNotGnu
- Update AutoGluon to 1.4.0 (#229) - @FANGAreNotGnu
- MCP File Upload and WebUI Improvements (#223) - @HuawenShen
- MCP and WebUI CI/CD Workflows (#226, #227) - @HuawenShen
- Add Abalone Dataset as Example (#258) - @FANGAreNotGnu
Bug Fixes
- Fix MultiTurn Context Bug - Resolved context handling issues in multi-turn conversations (#254) - @FANGAreNotGnu
- Resolve Test Race Conditions (#230) - @HuawenShen
Security
- Scope Down GitHub Token Permissions - Tightened CI/CD token permissions for improved security (#248) - @adnanhkhan
- Change IAM Role to Assistant-Specific Role (#235) - @tonyhoo
Getting Started
AutoGluon Assistant v1.1.0 requires Python 3.10 - 3.12 and is available on Linux.
```
pip install autogluon.assistant==1.1.0
```

or install with uv (recommended):

```
pip install uv
uv pip install autogluon.assistant==1.1.0
```

Contributors
We thank the following contributors for their work on AutoGluon Assistant v1.1.0:
- @FANGAreNotGnu (Haoyang Fang)
- @HuawenShen
- @boranhan
- @tonyhoo
- @adnanhkhan
v1.0.0
🚀 AutoGluon Assistant v1.0.0 “MLZero”
What’s New
We are excited to present the AutoGluon-Assistant 1.0 release. A major step up from v0.1, v1.0 expands beyond tabular data to robustly support multiple modalities, including image, text, tabular, audio, and mixed-data pipelines. This aligns precisely with the MLZero vision of comprehensive, modality-agnostic ML automation.
Official MLZero Launch
AutoGluon Assistant v1.0 is now synonymous with "MLZero: A Multi-Agent System for End-to-end Machine Learning Automation" (arXiv:2505.13941), the end-to-end, zero-human-intervention AutoML agent framework for multimodal data.
Built on a novel multi-agent architecture using LLMs, MLZero handles perception, memory (semantic & episodic), code generation, execution, and iterative debugging — seamlessly transforming raw multimodal inputs into high-quality ML/DL pipelines.
Why It Matters
- No-code: Users define tasks purely through natural language ("classify images of cats vs dogs with custom labels"), and MLZero delivers complete solutions with zero manual configuration or technical expertise required.
- Built on proven foundations: MLZero generates code using established, high-performance ML libraries rather than reinventing the wheel, ensuring robust solutions while maintaining the flexibility to easily integrate new libraries as they emerge.
- Research-grade performance: Extensively validated across 25 challenging tasks spanning diverse data modalities, MLZero outperforms competing methods by a large margin, achieving a success rate of 0.92 (+263.6%) and an average rank of 2.28.
| Metric | Ours | Codex CLI | AIDE | DS-Agent | AK |
|---|---|---|---|---|---|
| Avg. Rank ↓ | 2.42 | 8.04 | 5.76 | 6.16 | 8.26 |
| Rel. Time ↓ | 1.0 | 0.15 | 0.23 | 2.83 | N/A |
| Success ↑ | 92.0% | 14.7% | 69.3% | 25.3% | 13.3% |
- Modular and extensible architecture: We separate the design and implementation of each agent and prompts for different purposes, with a centralized manager coordinating them. This makes adding or editing agents, prompts, and workflows straightforward and intuitive for future development.
Brand-new WebUI and MCP
We’re also excited to introduce the newly redesigned WebUI in v1.0, now with a streamlined chatbot-style interface that makes interacting with MLZero intuitive and engaging:
- Upload & Describe: Drag your data folder into the chat input, then simply type your task (e.g., "train a classifier for churn prediction"). Whether you upload a ZIP of CSV files or type instructions, the WebUI turns a complex AutoML workflow into an end-to-end ML solution.
- Configure: Easily set your model provider and credentials via the Settings panel.
- Live log: Watch real-time logs from data perception to planning and execution, all visualized directly in the browser.
- Visualization and downloadable results: View key outputs and download trained models, prediction results, and generated code with a single click.
Furthermore, we’re also bringing MCP (Model Context Protocol) integration to MLZero, enabling seamless remote orchestration of AutoML pipelines through a standardized protocol:
- Distributed deployment: Run your ML backend on powerful EC2 instances while controlling it from your local machine, or keep everything local for development.
- LLM-ready tools: Expose AutoML capabilities as MCP tools that any LLM can understand and execute, from Claude to GPT-4 to open-source models.
- Natural language control: Connect Bedrock, OpenAI, or any LLM provider to orchestrate complex ML workflows through conversational interfaces.
- Transparent pipeline: Watch as your prompts transform into uploaded data, running tasks, and downloaded results, all through a single run_autogluon_pipeline tool.
- Flexible architecture: Deploy servers across machines, configure tunneling as needed, or run everything locally; MCP adapts to your infrastructure.
This MCP integration transforms MLZero into a universally accessible ML service — turning any LLM into your personal AutoML assistant.
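Since MCP messages are JSON-RPC 2.0, a tool invocation like the one above reduces to a tools/call request. The sketch below builds such a request by hand; the argument names passed to run_autogluon_pipeline are hypothetical, not the tool's documented schema:

```python
import json

def make_tool_call_request(request_id: int, tool_name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (MCP messages are JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical arguments; the real run_autogluon_pipeline parameters may differ.
request = make_tool_call_request(
    1,
    "run_autogluon_pipeline",
    {"input_dir": "./data", "instructions": "train a churn classifier"},
)
print(json.dumps(request, indent=2))
```

In practice an MCP client library handles this framing for you; the point is only that any JSON-RPC-capable client, or an LLM emitting such a payload, can drive the pipeline.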
Getting Started
AutoGluon Assistant is supported on Python 3.8 - 3.11 and is available on Linux (macOS and Windows support is planned for our next official release, pending dependency fixes).
```
pip install autogluon.assistant
```

or install with uv (recommended):

```
pip install uv
uv pip install autogluon.assistant==1.0
```

To use the CLI:

```
mlzero -i <input_data_dir>
```

To start the WebUI:

```
mlzero-backend   # command to start backend
mlzero-frontend  # command to start frontend on 8509 (default)
```

To start the MCP:

- Start the server

```
cd autogluon-assistant
mlzero-backend  # command to start backend
bash ./src/autogluon/mcp/server/start_services.sh  # starts the service; run it in a new terminal
```

- Start the client

```
cd autogluon-assistant
python ./src/autogluon/mcp/client/server.py
```

If you use AutoGluon Assistant (MLZero) in your research, please cite our paper:
@misc{fang2025mlzeromultiagentendtoendmachine,
title={MLZero: A Multi-Agent System for End-to-end Machine Learning Automation},
author={Haoyang Fang and Boran Han and Nick Erickson and Xiyuan Zhang and Su Zhou and Anirudh Dagar and Jiani Zhang and Ali Caner Turkmen and Cuixiong Hu and Huzefa Rangwala and Ying Nian Wu and Bernie Wang and George Karypis},
year={2025},
eprint={2505.13941},
archivePrefix={arXiv},
primaryClass={cs.MA},
url={https://arxiv.org/abs/2505.13941},
}

We also thank the following contributors for their valuable discussions and feedback throughout the development of AutoGluon Assistant 1.0 (in alphabetical order):
v0.1.0
Version 0.1.0 🎉
👋 Hello World! The AutoGluon team is happy to announce the release of AutoGluon-Assistant. This is the initial release of the Assistant module and enables users to solve tabular machine learning problems using only natural language descriptions. ✨
🚀 Give it a try, and we are happy to hear any feedback!
🤖 Currently, AutoGluon-Assistant supports AWS Bedrock as the default LLM provider, with OpenAI supported as an alternative.
📚 To learn more, check out our tutorial
🙌 This release contains 134 commits from 7 contributors.
👥 Full Contributor List (ordered by # of commits):
This version supports Python versions 3.8 to 3.11.
