spec(manipulation/memory): object memory tracker on memory2, propose …#2067

Draft · jhengyilin wants to merge 4 commits into
…architecture design for integrating memory2 with current manipulation stack
## Memory2-native perception (Refs #1893)

Goal: give the agent natural-language access to its workspace — find, recall, and manipulate objects without a fixed class list, with the tracker maintaining identity and lifecycle automatically.

```mermaid
flowchart TB
    camera([camera])
    subgraph m2 ["memory2"]
        direction TB
        semsearch["SemanticSearch<br/><i>continuous CLIP, brightness/sharpness filtered</i>"]
        subgraph db ["recording.db"]
            direction LR
            color[("color_image")]
            depth[("depth_image")]
            info[("camera_info")]
            embedded[("color_image_embedded")]
            obs[("object_observations")]
            events[("object_events")]
        end
    end
    recorder["RGBDCameraRecorder"]
    lazy["LazyPerceptionModule<br/><i>@skill find_objects(prompts)</i><br/>+ startup scan + 10s heartbeat"]
    tracker["ObjectMemoryTracker<br/><i>identity + lifecycle</i>"]
    manip["PickAndPlaceModule<br/><i>manipulation — no API change</i>"]
    recall["@skill recall(name)"]
    camera --> recorder
    recorder ==> color
    recorder ==> depth
    recorder ==> info
    color -. "auto-subscribe" .-> semsearch
    semsearch ==> embedded
    embedded -. "pulled on trigger" .-> lazy
    depth -. ".at(peak.ts)" .-> lazy
    info -. ".last()" .-> lazy
    lazy ==>|"list[Object]"| tracker
    tracker ==> obs
    tracker ==> events
    tracker ==>|"tracked_objects"| manip
    tracker ==>|"watched_names: set[str]"| lazy
    events --> recall
    classDef stream fill:#fef3c7,stroke:#d97706,stroke-width:2px
    classDef module fill:#dbeafe,stroke:#2563eb,stroke-width:2px
    classDef external fill:#f3f4f6,stroke:#6b7280,stroke-width:1px
    class color,depth,info,embedded,obs,events stream
    class recorder,lazy,semsearch,tracker,recall,manip module
    class camera external
```
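The startup-scan + 10 s heartbeat loop in the diagram could be sketched as below. This is a sketch only: `find_objects`, `watched_names`, and `update` are names taken or inferred from the diagram, and the stub classes exist just to make the example self-contained — they are not the real module interfaces.

```python
HEARTBEAT_S = 10.0  # from the diagram: startup scan + 10 s heartbeat


class StubPerception:
    """Stand-in for LazyPerceptionModule (real interface not shown in this spec)."""

    def find_objects(self, prompts: list[str]) -> list[dict]:
        # Pretend every prompt is found with full confidence.
        return [{"name": p, "confidence": 1.0} for p in prompts]


class StubTracker:
    """Stand-in for ObjectMemoryTracker, exposing the watched_names port."""

    def __init__(self) -> None:
        self.watched_names: set[str] = set()
        self.received: list[dict] = []

    def update(self, objects: list[dict]) -> None:
        self.received.extend(objects)


def heartbeat_cycle(perception: StubPerception, tracker: StubTracker) -> None:
    """One heartbeat tick: re-scan only the names the tracker is watching,
    then feed the detections back so identity/lifecycle stay current."""
    prompts = sorted(tracker.watched_names)
    if prompts:  # no watched objects -> no scan, keeping the module lazy
        tracker.update(perception.find_objects(prompts))
```

The key design point the sketch shows: the tracker drives perception through the `watched_names` port, so the heartbeat only spends compute on objects someone actually cares about.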
### What this unlocks (agent-facing API)

Once any object enters the tracker (via agent call, startup scan, or heartbeat), the tracker's …

### How it works — v3 workflow

### Why this design
…or agent to query on memory2
## Memory2-native perception for manipulation (Refs #1893)

```mermaid
flowchart TB
    camera([camera])
    subgraph m2 ["memory2"]
        direction TB
        semsearch["SemanticSearch<br/><i>continuous CLIP, brightness/sharpness filtered</i>"]
        subgraph db ["recording.db"]
            direction LR
            color[("color_image")]
            depth[("depth_image")]
            info[("camera_info")]
            embedded[("color_image_embedded")]
        end
    end
    recorder["RGBDCameraRecorder"]
    lazy["LazyPerceptionModule<br/><i>skills: find_objects · find_objects_near · recall</i>"]
    manip["PickAndPlaceModule<br/><i>manipulation — reads latest_detections</i>"]
    camera --> recorder
    recorder ==> color
    recorder ==> depth
    recorder ==> info
    color -. "auto-subscribe" .-> semsearch
    semsearch ==> embedded
    embedded -. ".search → .filter → .order_by(ts).first()" .-> lazy
    depth -. ".at(obs.ts)" .-> lazy
    info -. ".last()" .-> lazy
    lazy ==>|"latest_detections: list[Object]"| manip
    classDef stream fill:#fef3c7,stroke:#d97706,stroke-width:2px
    classDef module fill:#dbeafe,stroke:#2563eb,stroke-width:2px
    classDef external fill:#f3f4f6,stroke:#6b7280,stroke-width:1px
    class color,depth,info,embedded stream
    class recorder,lazy,semsearch,manip module
    class camera external
```
### What this unlocks (agent-facing API)

Three skills, each a one-line composition of memory2 primitives. Every skill returns the most recent confident match along with its timestamp.
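As a sketch of that composition pattern (`.search → .filter → .order_by(ts).first()`, as labelled on the diagram edge): the `Stream` class below is a minimal stand-in for a memory2 stream, and `Hit`, `MIN_SCORE`, and the skill body are assumptions for illustration, not the actual memory2 API.

```python
from dataclasses import dataclass


@dataclass
class Hit:
    """One embedded-frame match (fields assumed for this sketch)."""
    ts: float      # capture timestamp
    score: float   # semantic-search confidence
    label: str


class Stream:
    """Minimal stand-in for a memory2 stream — just enough surface
    to demonstrate the .search → .filter → .order_by(ts).first() chain."""

    def __init__(self, hits):
        self._hits = list(hits)

    def search(self, prompt: str) -> "Stream":
        return Stream(h for h in self._hits if prompt in h.label)

    def filter(self, pred) -> "Stream":
        return Stream(h for h in self._hits if pred(h))

    def order_by(self, key, descending: bool = True) -> "Stream":
        return Stream(sorted(self._hits, key=key, reverse=descending))

    def first(self):
        return self._hits[0] if self._hits else None


MIN_SCORE = 0.3  # assumed confidence floor


def find_objects(embedded: Stream, prompt: str):
    """Most recent confident match for a prompt, with its timestamp."""
    return (embedded.search(prompt)
                    .filter(lambda h: h.score >= MIN_SCORE)
                    .order_by(lambda h: h.ts)
                    .first())
```

The chain keeps each skill a one-liner over stream primitives: low-confidence frames are filtered out first, then ordering by timestamp (newest first) makes `.first()` return the most recent confident hit.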
### How it works — walkthrough

### Why this design
## Spec for the object memory tracker on memory2 (Refs #1893)

```mermaid
flowchart TB
    camera([camera])
    perception["ObjectSceneRegistrationModule<br/><i>detection only — no ObjectDB</i>"]
    tracker["ObjectMemoryTracker<br/><i>(MemoryModule)</i>"]
    manipulation["PickAndPlaceModule<br/><i>manipulation — no API change</i>"]
    skills["@skill recall(name)<br/><i>cross-session memory</i>"]
    camera --> perception
    perception -->|"raw detections<br/>list[DetObject]"| tracker
    tracker -->|"tracked_objects (port)<br/>list[DetObject]"| manipulation
    subgraph m2 ["memory2 — source of truth"]
        direction TB
        obs[("object_observations<br/><i>dense — log</i>")]
        events[("object_events<br/><i>sparse — lifecycle</i>")]
    end
    tracker == "append + inline cache update" ==> obs
    tracker == "append + inline cache update" ==> events
    obs -. "sync .to_list() replay on start()" .-> tracker
    events -. "sync .to_list() replay on start()" .-> tracker
    events --> skills
    classDef stream fill:#fef3c7,stroke:#d97706,stroke-width:2px
    classDef module fill:#dbeafe,stroke:#2563eb,stroke-width:2px
    classDef external fill:#f3f4f6,stroke:#6b7280,stroke-width:1px
    class obs,events stream
    class perception,tracker,manipulation module
    class camera,skills external
```

### How it works — Propose architecture workflow
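The walkthrough that follows cycles through the tracker's lifecycle vocabulary. As a sketch, the records carried by the two streams might look like this — the event names come from the spec; the field layout is an assumption:

```python
from dataclasses import dataclass
from enum import Enum


class Lifecycle(Enum):
    """Sparse transitions appended to the object_events stream."""
    APPEARED = "APPEARED"
    PROMOTED = "PROMOTED"
    LABEL_CHANGED = "LABEL_CHANGED"
    MOVED = "MOVED"
    LOST = "LOST"


@dataclass
class Observation:
    """Dense evidence record for object_observations (fields assumed)."""
    object_id: str
    name: str
    position: tuple[float, float, float]  # workspace coordinates, metres
    confidence: float
    ts: float


@dataclass
class Event:
    """Sparse story record for object_events (fields assumed)."""
    object_id: str
    name: str
    kind: Lifecycle
    ts: float
```

Splitting evidence (dense) from story (sparse) is what lets `recall(name)` answer from a handful of events instead of scanning every observation.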
- Cup detected at (0.4, 0.1, 0.9): `APPEARED` event + observation, `confidence = 1.0`, `PROMOTED`. The cup is in `tracked_objects`.
- While the cup goes unseen, belief decays: `confidence ≈ 0.77` — still confident; `confidence ≈ 0.51` — borderline; `confidence ≈ 0.41` — tentative. It drops out of the snapshot but stays match-eligible.
- Cup re-detected at (1.0, 0.5, 0.9) — 70 cm away: `MOVED` event. No phantom at the old position.
- `confidence < 0.1` → `LOST` event. The cup moves to the recently-lost bucket.
- On restart, a sync `stream.to_list()` replay over both streams rebuilds the cache from memory2 before the tracker accepts new detections. No bespoke load code.
- `recall("cup")` calls `events.tags(name="cup").last()` → returns the `LOST` event. The process can answer questions about a cup it never saw in its own lifetime.

### Why this design

- Two streams in memory2 — `object_observations` (every matched detection — the evidence) and `object_events` (lifecycle transitions `APPEARED` / `PROMOTED` / `LABEL_CHANGED` / `MOVED` / `LOST` — the story).
- Continuous belief over binary present/absent — one tunable (`time_constant_s = 15`) controls how forgiving the tracker is of occlusion. The tentative band (0.2 – 0.5) keeps mid-confidence objects match-eligible, so a single missed scan can't create a duplicate identity.
- Memory2 holds the persistent record — object history lives in the streams across sessions. The tracker reads from memory2 on startup, so cross-session memory comes for free.
- No change to manipulation's API — `tracked_objects` publishes `list[Object]` (the same type used today), so `PickAndPlaceModule` works without modification.

Solves the two issues we discussed:
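The continuous-belief mechanics above could be sketched as follows. The spec itself fixes only `time_constant_s = 15`, the tentative band (0.2 – 0.5), and the `LOST` threshold (0.1); the exponential form is an assumption, chosen because it reproduces the ≈ 0.77 / ≈ 0.51 / ≈ 0.41 values from the walkthrough at plausible elapsed times.

```python
import math

TIME_CONSTANT_S = 15.0  # the one tunable named in the spec
SNAPSHOT_MIN = 0.5      # below this, the object leaves the tracked snapshot
TENTATIVE_MIN = 0.2     # tentative band floor (0.2 – 0.5): still match-eligible
LOST_THRESHOLD = 0.1    # from the walkthrough: confidence < 0.1 -> LOST event


def confidence(seconds_since_last_match: float) -> float:
    """Belief in an unseen object, decayed exponentially (assumed form).
    Under this form, 0.77 / 0.51 / 0.41 fall out at roughly 4 s / 10 s / 13.4 s."""
    return math.exp(-seconds_since_last_match / TIME_CONSTANT_S)


def band(c: float) -> str:
    """Bucket a belief value using the spec's thresholds."""
    if c >= SNAPSHOT_MIN:
        return "confident"   # in the tracked snapshot
    if c >= TENTATIVE_MIN:
        return "tentative"   # out of snapshot, still match-eligible
    if c >= LOST_THRESHOLD:
        return "fading"
    return "lost"            # emit LOST, move to recently-lost bucket
```

One time constant, one curve: a brief occlusion only dips belief into the tentative band, so the next detection re-matches the same identity instead of minting a duplicate.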