Commit bf38ee5

Borda and claude committed
feat: add CompactMask for memory-efficient crop-RLE mask storage
Dense (N, H, W) bool masks cause OOM for aerial imagery (1000 objects × a 4K image ≈ 8.3 GB). CompactMask encodes each mask as a run-length sequence of its bounding-box crop, reducing typical usage to ~2 MB.

- New `CompactMask` class with a full duck-typed ndarray interface: `__getitem__`, `__array__`, `shape`, `dtype`, `area`, `sum`, `merge`, `with_offset`; drop-in compatible with existing `np.ndarray` masks.
- Private row-major RLE helpers: `_rle_encode`, `_rle_decode`, `_rle_area`.
- Phase 2 integration: `Detections` accepts `CompactMask` for the `mask` field; `validate_mask`, the `area` property, and `Detections.merge` all handle it.
- Phase 3 optimised paths: `calculate_masks_centroids` uses crop-space arithmetic; `MaskAnnotator` paints crop regions directly; `move_detections` uses `with_offset` instead of materialising dense masks; `get_mask_size_category` uses `mask.area`.
- 54 new tests (41 unit + 13 integration); all 17 doctests pass.
- All 1190 existing tests pass; pre-commit hooks clean.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
1 parent d70bc4f commit bf38ee5

10 files changed: 1,443 additions and 17 deletions
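The crop-RLE scheme the commit describes can be sketched in a few lines. The helpers below are illustrative stand-ins, not the library's private `_rle_encode`/`_rle_decode`/`_rle_area`: a 2-D bool crop is flattened row-major and stored as alternating run lengths, starting with a (possibly zero-length) run of False.

```python
import numpy as np


def rle_encode(crop: np.ndarray) -> np.ndarray:
    """Row-major RLE of a 2-D bool crop as alternating run lengths."""
    flat = crop.ravel().astype(bool)
    # Indices where the value flips, plus both ends, give run boundaries.
    change = np.flatnonzero(flat[1:] != flat[:-1]) + 1
    bounds = np.concatenate(([0], change, [flat.size]))
    runs = np.diff(bounds)
    if flat[0]:
        # Keep the invariant "even runs are False": prepend a zero-length run.
        runs = np.concatenate(([0], runs))
    return runs


def rle_decode(runs: np.ndarray, shape: tuple[int, int]) -> np.ndarray:
    # Runs alternate False, True, False, ...
    values = np.arange(len(runs)) % 2 == 1
    return np.repeat(values, runs).reshape(shape)


def rle_area(runs: np.ndarray) -> int:
    # Foreground pixels are exactly the True runs (odd indices).
    return int(runs[1::2].sum())
```

For a typical object, the run array is orders of magnitude smaller than a dense (H, W) plane, which is where the ~8.3 GB → ~2 MB reduction comes from.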

src/supervision/__init__.py

Lines changed: 1 addition & 0 deletions
@@ -45,6 +45,7 @@
 )
 from supervision.dataset.formats.coco import get_coco_class_index_mapping
 from supervision.dataset.utils import mask_to_rle, rle_to_mask
+from supervision.detection.compact_mask import CompactMask
 from supervision.detection.core import Detections
 from supervision.detection.line_zone import (
     LineZone,
src/supervision/annotators/core.py

Lines changed: 24 additions & 3 deletions
@@ -2,7 +2,7 @@
 
 from functools import lru_cache
 from math import sqrt
-from typing import Any, overload
+from typing import Any, cast, overload
 
 import cv2
 import numpy as np
@@ -434,6 +434,13 @@ def annotate(
 
         colored_mask = np.array(scene, copy=True, dtype=np.uint8)
 
+        from supervision.detection.compact_mask import CompactMask
+
+        compact_mask = (
+            cast(CompactMask, detections.mask)
+            if isinstance(detections.mask, CompactMask)
+            else None
+        )
         for detection_idx in np.flip(np.argsort(detections.area)):
             color = resolve_color(
                 color=self.color,
@@ -443,8 +450,22 @@
                 if custom_color_lookup is None
                 else custom_color_lookup,
             )
-            mask = np.asarray(detections.mask[detection_idx], dtype=bool)
-            colored_mask[mask] = color.as_bgr()
+            if compact_mask is not None:
+                # Paint only the bounding-box crop — avoids a full (H, W) alloc.
+                x1 = int(compact_mask._offsets[detection_idx, 0])
+                y1 = int(compact_mask._offsets[detection_idx, 1])
+                crop_h = int(compact_mask._crop_shapes[detection_idx, 0])
+                crop_w = int(compact_mask._crop_shapes[detection_idx, 1])
+                crop_m = compact_mask.crop(detection_idx)
+                colored_mask[y1 : y1 + crop_h, x1 : x1 + crop_w][crop_m] = (
+                    color.as_bgr()
+                )
+            else:
+                mask = np.asarray(
+                    detections.mask[detection_idx],
+                    dtype=bool,
+                )
+                colored_mask[mask] = color.as_bgr()
 
             cv2.addWeighted(
                 colored_mask, self.opacity, scene, 1 - self.opacity, 0, dst=scene
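The crop-painting branch in `MaskAnnotator.annotate` can be checked in isolation. The sketch below uses hypothetical offsets and a small bool crop rather than real `CompactMask` internals; it shows that painting only the bounding-box slice of the scene yields the same pixels as indexing with a dense (H, W) mask.

```python
import numpy as np

H, W = 32, 48
scene = np.zeros((H, W, 3), dtype=np.uint8)

# Hypothetical crop data: a 4x5 bool mask placed at (x1, y1) = (10, 6).
x1, y1 = 10, 6
crop = np.ones((4, 5), dtype=bool)
crop[0, 0] = False  # one background pixel inside the box

# Dense equivalent: what the old path would have to allocate per object.
dense = np.zeros((H, W), dtype=bool)
dense[y1 : y1 + 4, x1 : x1 + 5] = crop

# Crop path: paint only the bounding-box window of the scene.
painted = scene.copy()
painted[y1 : y1 + crop.shape[0], x1 : x1 + crop.shape[1]][crop] = (0, 255, 0)

# Reference path: paint through the dense mask.
reference = scene.copy()
reference[dense] = (0, 255, 0)
assert np.array_equal(painted, reference)
```

The equivalence holds because every True pixel of the dense mask lies inside the bounding box by construction, so the crop slice touches exactly the same scene pixels.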
