| layout | default |
|---|---|
| title | II - Cognitive Modules |
| subtitle | |
- Sensor codelets read simulator data.
- Build bottom-up (BU) and top-down (TD) feature maps → merge into CFM.
- Convert CFM into a Salience Map (guides focus).
- Winner module selects the region of attention.
- DecisionMaking + IoR turn focus into actions, avoiding repetition.
- Code examples from the `attention_trail` repository.
- A Mind holds Codelets and MemoryObjects (MOs).
- Each codelet runs `proc()`: reads from MOs, writes to MOs.
- Complex behavior emerges from many small, concurrent codelets.
- In this session:
sensors → sensor MOs → perception → feature maps MOs → CFM → CFM MO → attention → attention MOs
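The codelet/MO pattern above can be sketched in plain Java. This is a minimal, self-contained illustration only: `MemoryObject` and `DoublerCodelet` here are hypothetical stand-ins, not the actual CST API, which is richer.

```java
// Minimal sketch of the codelet pattern (illustrative, not the real CST classes).
// A codelet's proc() reads from input MOs and writes results to output MOs.
public class CodeletSketch {
    // Stand-in for a MemoryObject: a mutable info holder shared between codelets.
    public static class MemoryObject {
        private Object info;
        public Object getI() { return info; }
        public void setI(Object i) { info = i; }
    }

    // Stand-in for a Codelet: doubles whatever number is in the input MO.
    public static class DoublerCodelet {
        private final MemoryObject in, out;
        public DoublerCodelet(MemoryObject in, MemoryObject out) {
            this.in = in;
            this.out = out;
        }
        public void proc() {
            double v = (Double) in.getI();
            out.setI(2.0 * v); // write the result back to the output MO
        }
    }

    public static void main(String[] args) {
        MemoryObject sensorMO = new MemoryObject();
        MemoryObject featureMO = new MemoryObject();
        sensorMO.setI(21.0);
        new DoublerCodelet(sensorMO, featureMO).proc(); // one cycle
        System.out.println(featureMO.getI()); // prints 42.0
    }
}
```

In CST proper, many such codelets run concurrently and coordinate only through the MOs they share, which is what the sensors → perception → attention pipeline above relies on.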
- Classes ending in `...vrep` pull images/depth from CoppeliaSim via the Remote API.
- Tutorial includes:
  - `training_obj.ttt` scene
  - Instructions to run in NetBeans + CoppeliaSim
- `Sensor_Vision`
  - Reads RGB frames from the source.
  - Publishes them to a vision MO.
- `Sensor_Depth`
  - Reads a depth frame.
  - Time-aligns it with vision.
  - Publishes to a depth MO.
- `Sensor_ColorRed / Green / Blue`
  - Minimal channel-specific readers.
  - Prepare per-channel data for downstream processing.
✅ Vision and depth MOs should update and synchronize before moving to perception.
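One simple way to keep the depth MO synchronized with the vision MO is nearest-timestamp matching. This is only a sketch of the idea; the method name and strategy are assumptions, not the tutorial's actual implementation.

```java
// Sketch: align a depth frame to a vision frame by nearest timestamp,
// so both MOs refer to (approximately) the same moment in the simulation.
public class TimeAlignSketch {
    // Returns the depth timestamp closest to the vision frame's timestamp.
    public static long nearestTimestamp(long visionTs, long[] depthTs) {
        long best = depthTs[0];
        for (long t : depthTs) {
            if (Math.abs(t - visionTs) < Math.abs(best - visionTs)) {
                best = t;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        long[] depthTs = {90, 99, 130};
        System.out.println(nearestTimestamp(100, depthTs)); // prints 99
    }
}
```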
- CFM = weighted sum of bottom-up + top-down maps.
- Track BU vs. TD contributions each cycle (BU-driven or TD-driven).
- Salience Map = CFM + current Attentional Map.
- Winner-takes-all picks the most salient region.
- IoR (Inhibition of Return) suppresses that region → shifts attention.
- Today’s focus: high-quality sensor data + feature maps for reliable salience.
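The CFM/salience pipeline above can be sketched as a few array operations. The weights, the additive combination, and the method names are illustrative assumptions; the model's actual formulation may differ.

```java
// Sketch of the attention pipeline: CFM = weighted BU + TD,
// Salience = CFM + attentional map, then winner-takes-all.
public class SalienceSketch {
    // Combined Feature Map as a weighted sum of bottom-up and top-down maps.
    public static double[] cfm(double[] bu, double[] td, double wBU, double wTD) {
        double[] out = new double[bu.length];
        for (int i = 0; i < bu.length; i++) {
            out[i] = wBU * bu[i] + wTD * td[i];
        }
        return out;
    }

    // Salience Map: CFM combined with the current attentional map.
    public static double[] salience(double[] cfm, double[] attMap) {
        double[] out = new double[cfm.length];
        for (int i = 0; i < cfm.length; i++) {
            out[i] = cfm[i] + attMap[i];
        }
        return out;
    }

    // Winner-takes-all: index of the most salient region.
    public static int winner(double[] sal) {
        int w = 0;
        for (int i = 1; i < sal.length; i++) {
            if (sal[i] > sal[w]) w = i;
        }
        return w;
    }
}
```

With a strong TD signal the winner shifts toward goal-consistent regions even when BU features elsewhere are strong, which is exactly the BU-driven vs. TD-driven distinction tracked each cycle.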
Top-down maps encode what the agent currently wants.
They compare the sensed scene to a desired target:
- Desired Color (goal RGB) → `TD_FM_Color` highlights regions closest to the target color.
- Desired Distance (goal depth) → `TD_FM_Depth` highlights regions matching the target range.
👉 These maps are goal-driven and shift attention when you change target values.
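A goal-driven color map can be sketched as an inverse-distance score against the desired RGB. The method name and the particular similarity function are assumptions for illustration, not the actual `TD_FM_Color` code.

```java
// Sketch of a top-down color feature map: 1.0 at an exact match with the
// goal RGB, falling toward 0 as the Euclidean distance in color space grows.
public class TDColorSketch {
    public static double[] tdColorMap(int[][] rgb, int[] goalRgb) {
        double[] map = new double[rgb.length];
        for (int i = 0; i < rgb.length; i++) {
            double d2 = 0.0;
            for (int c = 0; c < 3; c++) {
                double diff = rgb[i][c] - goalRgb[c];
                d2 += diff * diff;
            }
            map[i] = 1.0 / (1.0 + Math.sqrt(d2)); // similarity score in (0, 1]
        }
        return map;
    }
}
```

Changing `goalRgb` immediately reshapes the map, which is why these maps shift attention when the target values change.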
- Selects focus from the Salience Map (and an optional `disSalMap`).
- Computes:
  - `argmax` region + confidence
  - Tie-breaking & hysteresis
- Outputs:
- winner index
- region coordinates
- score → current attention decision
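Hysteresis in the Winner module can be sketched as "keep the previous winner unless a challenger beats it by a margin." The method name and margin scheme are illustrative assumptions, not the module's actual code.

```java
// Sketch of winner selection with hysteresis: the previous winner is kept
// unless some region's salience exceeds it by at least `margin`, which
// prevents rapid flip-flopping between nearly tied regions.
public class WinnerSketch {
    public static int winnerWithHysteresis(double[] sal, int prev, double margin) {
        int best = 0;
        for (int i = 1; i < sal.length; i++) {
            if (sal[i] > sal[best]) best = i;
        }
        // Stick with the previous winner unless it is clearly beaten.
        if (prev >= 0 && sal[best] <= sal[prev] + margin) return prev;
        return best;
    }
}
```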
- DecisionMaking maps winner → agent/simulator actions:
- “look at (r,c)”
- “move gripper to (x,y)”
- “center camera”
- Also updates:
- IoR mask (to prevent repetition).
- Logs action with timestamp + confidence.
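The IoR update can be sketched as "inhibit the region just attended, and let older inhibition decay." The method name and decay scheme are assumptions for illustration.

```java
// Sketch of an Inhibition-of-Return update: after acting on the winner,
// raise its inhibition so attention shifts elsewhere next cycle, while
// decaying old inhibition so regions eventually become eligible again.
// The mask would typically be subtracted from the salience map.
public class IoRSketch {
    public static void updateIoR(double[] ior, int winner,
                                 double inhibit, double decay) {
        for (int i = 0; i < ior.length; i++) {
            ior[i] *= decay; // inhibition fades over time
        }
        ior[winner] = Math.max(ior[winner], inhibit); // suppress current focus
    }
}
```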
- No goal → salience driven by strong bottom-up features.
- With goal → CFM + salience bias toward goal-consistent regions.
- IoR prevents repetition → smoother exploration.
- Stable salience distribution.
- Clear winner transitions.
- Actions aligned with the current winner.
- Access `attention_trail` present in File System; open a terminal and switch to the branch `tutorial`:
  `git checkout tutorial`
- Copy the `lib` folder present in `1_CSTCore/1_MIMoCoreModel`.
- Paste the folder into `attention_trail` present in File System.
- In VNC, access the `sharevnc` folder and then the folder `Coppelia`.
- Access the folder for version 4.9: `CoppeliaSim_Pro_V4_9_0_rev6_Ubuntu22_04`.
- Open a terminal and run the script `CoppeliaSim.sh`.
- Open the scene present in `attention_trail/scenes`.
- Build:
  `javac -cp 'lib/*' -d build $(find src/main/java -name "*.java")`
- Run:
  `java -cp "build:lib/*" cst_attmod_app.CST_AttMod_App`
- CST should start the simulation in CoppeliaSim and begin collecting data.
- Perceptual maps should be built.
- Attention maps should be calculated with the Colombini model.