MPS Ops

High-performance PyTorch operators for Apple Silicon (M1/M2/M3/M4).

Packages

| Package | Description | Install |
| --- | --- | --- |
| mps-flash-attn | Flash Attention with O(N) memory | `pip install mps-flash-attn` |
| mps-bitsandbytes | 8-bit quantization (INT8/FP8) | `pip install mps-bitsandbytes` |
| mps-deform-conv | Deformable Convolution 2D | `pip install mps-deform-conv` |
| mps-conv3d | 3D Convolution | `pip install mps-conv3d` |
| mps-carafe | CARAFE content-aware upsampling | `pip install mps-carafe` |
| mps-correlation | Correlation layer for optical flow | `pip install mps-correlation` |

Or install all at once:

```shell
pip install mpsops
```
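The O(N) memory claim for mps-flash-attn comes from the online-softmax trick at the heart of flash attention: the softmax normalizer is updated incrementally, so the full N x N attention score matrix is never materialized. A minimal pure-Python sketch of that trick, for illustration only (this is not the package's API, and real kernels process blocks of vectors rather than scalars):

```python
import math

def online_softmax_weighted_sum(scores, values):
    """One streaming pass over (score, value) pairs with O(1) extra
    state per query: running max, running denominator, running sum.
    Numerically equivalent to softmax(scores) . values."""
    m = float("-inf")   # running max, for numerical stability
    denom = 0.0         # running softmax denominator
    acc = 0.0           # running weighted sum of values
    for s, v in zip(scores, values):
        m_new = max(m, s)
        # Rescale previous partial results to the new max
        scale = math.exp(m - m_new) if m != float("-inf") else 0.0
        w = math.exp(s - m_new)
        denom = denom * scale + w
        acc = acc * scale + w * v
        m = m_new
    return acc / denom

scores = [0.1, 2.0, -1.0, 0.5]
values = [1.0, 2.0, 3.0, 4.0]
out = online_softmax_weighted_sum(scores, values)
```

Because each step only rescales the running state, memory stays constant in the sequence length even though the result matches the naive two-pass softmax exactly.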

Why?

PyTorch's MPS backend lacks many optimized operators that CUDA has. We bridge that gap with native Metal implementations, enabling models that would otherwise fail on Apple Silicon.
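As an illustration of what INT8 quantization of the mps-bitsandbytes kind does, here is a minimal symmetric absmax quantize/dequantize sketch in pure Python. The function names are illustrative, not the package's API; the real package operates on PyTorch tensors and fused Metal kernels:

```python
def quantize_absmax(xs):
    """Symmetric absmax INT8 quantization: pick a scale so the largest
    magnitude maps to 127, then round each value to an int in [-127, 127]."""
    scale = max(abs(x) for x in xs) / 127.0
    q = [max(-127, min(127, round(x / scale))) for x in xs]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [qi * scale for qi in q]

q, scale = quantize_absmax([0.5, -1.25, 3.0, 0.0])
```

Rounding bounds the per-element reconstruction error by half the scale, which is why absmax quantization degrades gracefully as long as the tensor has no extreme outliers.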

Popular repositories

  1. mps-flash-attention
  2. mps-bitsandbytes: 8-bit quantization for PyTorch on Apple Silicon (M1/M2/M3/M4)
  3. mps-conv3d: 3D Convolution for Apple Silicon (MPS)
  4. mps-deform-conv: Deformable Convolution 2D for PyTorch on Apple Silicon (MPS)
  5. mps-linear-attention: Metal kernels for Flash Linear Attention (DeltaNet/Qwen3.5) on Apple Silicon MPS
  6. mps-correlation: Correlation layer for optical flow on Apple Silicon (MPS)
