"There are no stupid questions, only stupid bugs that weren't caught at compile time."
Axiom.jl is a formally verified machine learning framework for Julia. It combines:
- Julia for high-level, expressive model definitions
- Zig for high-performance native inference
- Formal methods for mathematical guarantees about model behavior
An axiom is a fundamental truth that doesn't need proof - it's self-evident. In Axiom.jl:
- Your model definitions are axioms - declarative specifications
- Properties you prove become theorems - derived truths
- The framework ensures your models are mathematically sound
Also, it sounds cool.
Axiom.jl is currently in alpha (v0.1.0). It's suitable for:
- Research and experimentation
- Learning about verified ML
- Non-critical applications
For production safety-critical systems, wait for v1.0 or use additional validation.
| Feature | PyTorch/TensorFlow | Axiom.jl |
|---|---|---|
| Shape errors caught | Runtime | Compile time |
| Formal verification | No | Yes |
| Interactive REPL | No | Yes |
| Mathematical notation | Limited | Native |
| Proof certificates | No | Yes |
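As a concrete illustration of the "compile time" row, a sketch of how a shape error might surface under `@axiom` (the model and the failure mode shown are hypothetical):

```julia
@axiom BadModel begin
    input :: Tensor{Float32, (784,)}
    # Dense(100, 10) expects 100 input features but receives 784,
    # so the mismatch is rejected at macro-expansion time,
    # before any data flows through the model.
    output = input |> Dense(100, 10) |> Softmax
end
```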
```julia
using Pkg
Pkg.add("Axiom")
```

Julia 1.9 or higher; we recommend Julia 1.10 for best performance.
Not for basic usage. The Julia backend works without Zig.
For the high-performance Zig backend:
Install Zig from https://ziglang.org/download/, then build:

```bash
cd ~/.julia/packages/Axiom/*/zig
zig build -Doptimize=ReleaseFast
```

Yes! Axiom.jl works on CPU. GPU support (CUDA, ROCm, Metal) is optional.
Yes, Axiom now includes first-class backend targets: `TPUBackend`, `NPUBackend`, `DSPBackend`, and `FPGABackend`.
Current status:
- Detection and backend selection APIs are available now.
- Compilation falls back safely to CPU when a coprocessor runtime is not present.
- Production-grade non-GPU kernels are still in progress.
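If coprocessor selection follows the same `compile(model, backend=...)` entry point used for the Zig backend, targeting one of these backends might look like the sketch below. The `:tpu` and `:fpga` symbols are assumptions, named after the backend types above; check the backend docs for the actual identifiers.

```julia
# Sketch: request a coprocessor backend; per the status notes,
# compilation falls back safely to CPU if the runtime is absent.
model = compile(my_model, backend=:tpu)   # TPUBackend
model = compile(my_model, backend=:fpga)  # FPGABackend
```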
Yes, with the following scope:
- REST: in-tree runtime server via `serve_rest(...)`
- GraphQL: in-tree runtime server via `serve_graphql(...)`
- gRPC: in-tree proto generation, in-process handlers, and network bridge server via `serve_grpc(...)` (supports `application/grpc` binary unary protobuf and `application/grpc+json` bridge mode)
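A minimal serving sketch, assuming each server function takes a model plus a `port` keyword (the exact signatures are assumptions):

```julia
# Hypothetical usage: load a saved model and expose it over REST.
model = load_model("model.axiom")
serve_rest(model, port=8080)        # REST
# serve_graphql(model, port=8081)   # GraphQL
# serve_grpc(model, port=50051)     # gRPC
```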
`@axiom`: Declarative, with type annotations and guarantees:

```julia
@axiom Model begin
    input :: Tensor{Float32, (784,)}
    output = input |> Dense(784, 10) |> Softmax
    @ensure valid_probabilities(output)
end
```

`Sequential`: Imperative, PyTorch-style:

```julia
model = Sequential(
    Dense(784, 10),
    Softmax()
)
```

Use `@axiom` for verified models, `Sequential` for quick prototyping.
```julia
struct MyLayer <: AbstractLayer
    weight::Matrix{Float32}
    bias::Vector{Float32}
end

function MyLayer(in_features::Int, out_features::Int)
    weight = randn(Float32, in_features, out_features) * 0.01f0
    bias = zeros(Float32, out_features)
    MyLayer(weight, bias)
end

function forward(layer::MyLayer, x)
    x * layer.weight .+ layer.bias'
end

parameters(layer::MyLayer) = (weight=layer.weight, bias=layer.bias)
```

`from_pytorch(...)` supports both direct checkpoints and descriptor import:
```julia
model = from_pytorch("pretrained.pt")            # requires python3 + torch
model = from_pytorch("pretrained.pytorch.json")
```

The ONNX export API is available for the currently supported operator subset:

```julia
to_onnx(model, "model.onnx", input_shape=(1, 3, 224, 224))
```

```julia
# Freeze specific parameters
model.encoder.requires_grad = false

# Or during training
for param in parameters(model.encoder)
    param .= param  # Detach from gradient computation
end
```

Anything that returns a boolean:
```julia
@ensure sum(output) ≈ 1.0
@ensure all(output .>= 0)
@ensure maximum(output) > 0.5
@ensure norm(weights) < 100
@ensure !any(isnan, output)
```

Properties that can be derived from function definitions:
```julia
# These can be proven
@prove ∀x. sum(softmax(x)) == 1.0
@prove ∀x. relu(x) >= 0
@prove ∀x. 0 <= sigmoid(x) <= 1

# These need runtime checking
@prove ∀x. my_custom_function(x) > 0  # Can't prove; falls back to @ensure
```

If a property can't be proven, it becomes a runtime assertion:

```
⚠ Cannot prove: ∀x. custom_fn(x) > 0
  Adding runtime assertion instead.
```

Your code still works, but you won't get compile-time guarantees.
```julia
result = verify(model, properties=[...], data=test_data)
if result.passed
    cert = generate_certificate(model, result)
    save_certificate(cert, "certificate.cert")
end
```

Inference: yes, 2-3x faster with the Zig backend. Training: comparable (Julia) to 1.5x faster (Zig).
```julia
# Compile for Zig
model = compile(my_model, backend=:zig)

# Or set an environment variable
ENV["AXIOM_BACKEND"] = "zig"
```

The first run compiles Julia code; subsequent runs are fast:
```julia
# First run: slow (compilation)
@time model(input)  # 2.5s

# Second run: fast
@time model(input)  # 0.01s
```

Use precompilation for production:
```julia
using Axiom
Axiom.precompile()
```

```julia
using Profile

@profile for i in 1:100
    model(input)
end
Profile.print()
```

Yes, via Zygote.jl integration:
```julia
using Zygote
grads = gradient(m -> loss(m(x), y), model)
```

```julia
# Custom loss: MSE plus an L1 penalty; `weights` is captured
# from the enclosing scope.
function my_loss(pred, target)
    mse_loss(pred, target) + 0.01f0 * sum(abs, weights)
end

train!(model, data, optimizer, loss_fn=my_loss)
```

```julia
optimizer = Adam(lr=0.001f0)
scheduler = CosineAnnealingLR(optimizer, T_max=100)

for epoch in 1:100
    train_epoch!(model, data, optimizer)
    step!(scheduler, epoch)
end
```

```julia
# Save
save_model(model, "model.axiom")

# Load
model = load_model("model.axiom")
```

This is the most common error. Read the message carefully:
```
ERROR: Shape mismatch at Dense layer
  Expected: Vector (1D)
  Got: Tensor{Float32, (28, 28, 64)} (3D)
  Solution: Add Flatten layer before Dense
```
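Following the error's suggestion, a sketch of the fix using the layer types from this FAQ (the exact `Flatten` placement is the point; the layer sizes are illustrative):

```julia
# 28×28×64 conv output → flatten to a 50176-element vector → Dense
model = Sequential(
    Flatten(),
    Dense(28 * 28 * 64, 10),
    Softmax()
)
```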
Julia's type inference couldn't determine types. Add type annotations:
```julia
# Before (type unstable)
x = some_computation()

# After (type stable)
x::Matrix{Float32} = some_computation()
```

Build the Zig backend:
```bash
cd ~/.julia/packages/Axiom/*/zig
zig build -Doptimize=ReleaseFast
export AXIOM_ZIG_LIB=$(pwd)/zig-out/lib/libaxiom_zig.so
```

Reduce the batch size or use gradient checkpointing:
```julia
# Smaller batches
loader = DataLoader(data, batch_size=16)  # Was 64

# Gradient checkpointing (roadmap item)
model = checkpoint(model, every=3)  # Checkpoint every 3 layers
```

See CONTRIBUTING.md. We welcome:
- Bug reports
- Documentation improvements
- New layer implementations
- Verification methods
- Performance optimizations
Open an issue on GitHub with:
- Julia version
- Axiom.jl version
- Minimal reproducing example
- Expected vs actual behavior
See Roadmap Commitments for planned work and delivery stages, plus Vision for long-term direction.
Yes. The core DSL, verification pipeline, and SMT integration are Julia-native. The Zig backend is optional and only used when explicitly enabled.
- Discord - Real-time help
- GitHub Discussions - Community Q&A
- API Reference - Detailed documentation