This is an introduction to developing within the MAX open source project. If you plan to contribute changes back to the repo, first read everything in CONTRIBUTING.md.
If you just want to build with MAX and aren't interested in developing in the source code, instead see the MAX quickstart guide.
First, make sure your system meets the MAX system requirements. The same requirements that apply to the `modular` package apply to developing in this repo. In particular, if you're on macOS, make sure you have the Metal utilities for GPU programming (included with recent versions of Xcode). You can install them with:

```shell
xcodebuild -downloadComponent MetalToolchain
```
Then you can get started:
- Fork the repo, clone it, and create a branch.

- Optionally, install pixi. We use it in our code examples for package management and virtual environments:

  ```shell
  curl -fsSL https://pixi.sh/install.sh | sh
  ```

- Optionally, install the Mojo extension in VS Code or Cursor.
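The fork-and-branch step above can be sketched as follows. This is only an illustration: `YOUR_USERNAME`, the repository name, and the branch name are placeholders, not values from this guide, so substitute your fork's actual URL.

```shell
# Fork the repo in the GitHub UI first, then clone your fork
# (placeholder URL; use your fork's real clone URL).
git clone git@github.com:YOUR_USERNAME/modular.git
cd modular

# Create a working branch for your changes (placeholder branch name).
git checkout -b my-feature-branch
```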
That's it.
The build system uses Bazel, and if you don't have it,
the bazelw script in the next step installs it.
From the repo root, run this `bazelw` command to run all the MAX tests:

```shell
./bazelw test //max/...
```

If it's your first time, it starts by installing the Bazel version manager, Bazelisk, which then installs Bazel.
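Since `bazelw` drives a standard Bazel installation, the usual Bazel command-line flags should pass through as well. For example, `--test_output=errors` is a standard Bazel flag (not MAX-specific) that prints the logs of failing tests to the console:

```shell
# Run all MAX tests and show logs only for failures
# (--test_output is a standard Bazel flag).
./bazelw test //max/... --test_output=errors
```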
You can run all the tests within a specific subdirectory by specifying the subdirectory and appending `/...`. For example:

```shell
./bazelw test //max/tests/integration/API/python/graph/...
./bazelw test //max/tests/tests/torch/...
```

To find all the test targets, you can run:

```shell
./bazelw query 'tests(//max/tests/...)'
```

When developing a new model architecture, or testing MAX API changes against existing models, you can use the following Bazel commands to run inference.
Note: Some models require Hugging Face authentication to load model weights, so you should set your HF access token as an environment variable:

```shell
export HF_TOKEN="hf_..."
```

For example, this `entrypoints:pipelines` generate command is equivalent to running inference with `max generate`:
```shell
./bazelw run //max/python/max/entrypoints:pipelines -- generate \
    --model OpenGVLab/InternVL3-8B-Instruct \
    --prompt "Hello, world!"
```

And this is equivalent to creating an endpoint with `max serve`:
```shell
./bazelw run //max/python/max/entrypoints:pipelines -- serve \
    --model OpenGVLab/InternVL3-8B-Instruct \
    --trust-remote-code
```

Here are some docs to help you start developing in the MAX framework:
- Contributing new model architectures
- Benchmarking a MAX endpoint
- Benchmarking Mojo kernels with `kbench`
- Kernel profiling with Nsight Compute
- Contributing changes to the repo
For more documentation, see docs.modular.com.