Subsumption Pattern Learning (SPL) Framework

License: MIT · Python 3.8+ · Model Context Protocol · arXiv · Cost Reduction: 13.9×

A hierarchical multi-agent framework that transforms collections of autonomous AI agents into a self-distilling swarm intelligence through shared collective memory.

SPL adapts Brooks' subsumption architecture from behavioral robotics to foundation model economics, implementing a formally-defined three-layer hierarchy (Reactive, Tactical, Deliberative) where learned patterns are distilled into a centralized Shared State via explicit inhibition signals. Interactive demo: https://spl-demo.vercel.app/

Paper: Subsumption Pattern Learning: A Formal Framework for Self-Distilling Swarm Intelligence Through Shared Collective Memory (Cuce, 2026)


📊 Key Results

| Metric            | SPL    | vs. Monolithic LLM | vs. FrugalGPT  |
|-------------------|--------|--------------------|----------------|
| Cost (100K tasks) | $89.47 | 13.9× reduction    | 3.2× reduction |
| Latency (median)  | 7ms    | 22× faster         | 4× faster      |
| Accuracy          | 96.9%  | -1.3%              | -0.5%          |
| Suppression Rate  | 94.5%  | —                  | —              |

Multi-agent swarm learning achieves an additional 42% reduction in foundation model escalations.


🎯 The Isolated Agent Problem

Modern LLM-based agents operate as isolated computational units, each invoking expensive foundation models independently without mechanisms for inter-agent learning or knowledge reuse. This isolation contradicts four decades of insights from behavioral robotics, swarm biology, and organizational psychology.

The economic consequences are significant:

  • Reasoning models generate 5× more tokens per request
  • Multi-step agentic workflows compound costs further
  • Daily costs can reach thousands of dollars
  • Costs remain constant even as agents repeatedly solve nearly identical problems

✨ The SPL Solution

SPL unifies three previously disparate research streams:

  1. Subsumption Architecture (Brooks, 1986): Layered behavioral control where simpler reactive modules suppress more complex deliberative ones
  2. Social Learning Theory (Bandura, 1977): Collectives outperform individuals when knowledge is effectively shared
  3. Swarm Intelligence (Kennedy & Eberhart, 2001): Decentralized systems with shared environmental state solve optimization problems through local interactions

Three-Layer Architecture

┌────────────────────────────────────────────────────────────────┐
│                      Incoming Request x                        │
└───────────────────────────────┬────────────────────────────────┘
                                ↓
┌────────────────────────────────────────────────────────────────┐
│  LAYER 0: Reactive / Structural Validation                     │
│  ─────────────────────────────────────────                     │
│  L₀(x) = (ERROR, e) if ¬valid(x), else (PASS, x)               │
│  Cost: $0  |  Latency: <1ms  |  Deterministic checks           │
└───────────────────────────────┬────────────────────────────────┘
                                ↓ I₀ = false
┌────────────────────────────────────────────────────────────────┐
│  LAYER 1: Tactical / Pattern Matching                          │
│  ─────────────────────────────────────                         │
│  L₁(x) = (MATCH, ψ_p*(x)) if ∃p*: φ_p*(x) ≥ θ ∧ complexity(x) ≤ α │
│  Cost: ~$0.0001  |  Latency: <10ms  |  Pattern library lookup  │
│                                                                │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │  Inhibition Signal: I₁(x) = true → SUPPRESS Layer 2      │  │
│  └──────────────────────────────────────────────────────────┘  │
└───────────────────────────────┬────────────────────────────────┘
                                ↓ I₁ = false (escalate)
┌────────────────────────────────────────────────────────────────┐
│  LAYER 2: Deliberative / Foundation Model Reasoning            │
│  ─────────────────────────────────────────────────             │
│  L₂(x) = (SOLVED, L(x), distill(L, x))                         │
│  Cost: $0.01-$0.10  |  Latency: 100-500ms  |  LLM inference    │
│                                                                │
│  ┌──────────────────────────────────────────────────────────┐  │
│  │  Pattern Distillation: New patterns → Shared State       │  │
│  └──────────────────────────────────────────────────────────┘  │
└────────────────────────────────────────────────────────────────┘
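The layered control flow above can be sketched as a short dispatch function. This is an illustrative sketch only, with hypothetical names (`process`, `complexity`, `call_llm`); it is not the `spl` package API:

```python
import re

def complexity(x):
    # Toy complexity proxy: normalized content length in [0, 1]
    return min(len(x['content']) / 1000.0, 1.0)

def process(x, patterns, theta=0.87, alpha=0.6, call_llm=None):
    # Layer 0: deterministic structural validation ($0, <1ms)
    if not isinstance(x, dict) or 'content' not in x:
        return {'layer': 0, 'result': 'ERROR'}
    # Layer 1: pattern matching; I1(x) = true suppresses Layer 2
    for p in patterns:
        confidence = p['confidence'] if re.search(p['matcher'], x['content'], re.I) else 0.0
        if confidence >= theta and complexity(x) <= alpha:
            return {'layer': 1, 'result': p['responder'], 'inhibition': True}
    # Layer 2: escalate to the foundation model (expensive path)
    return {'layer': 2, 'result': call_llm(x['content']), 'inhibition': False}

patterns = [{'matcher': r'urgent|asap', 'responder': 'urgent', 'confidence': 0.95}]
print(process({'content': 'URGENT: outage'}, patterns))
# Layer 1 handles the request; Layer 2 is never invoked
```

Note that subsumption here is purely a control-flow property: each cheaper layer gets the first chance to answer and suppresses the layers below it.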

🔬 Formal Framework

Definition 1: SPL Agent

An SPL agent is a tuple A = (P_local, S, θ, α, L) where:

  • P_local: Agent's local pattern set
  • S: Reference to the shared collective memory
  • θ ∈ (0, 1): Confidence threshold for Layer 1 suppression
  • α ∈ ℝ⁺: Complexity threshold
  • L : X → Y: Layer 2 foundation model

Definition 2: Pattern

A pattern p = (φ_p, ψ_p, κ_p) consists of:

  • φ_p : X → [0, 1]: Matcher returning match confidence
  • ψ_p : X → Y: Responder producing outputs for matched inputs
  • κ_p ∈ ℝ⁺: Complexity bound

Definition 3: Inhibition Signal

The Layer 1 inhibition signal I₁ : X → {true, false}:

I₁(x) = true   if max_{p ∈ P_e} φ_p(x) ≥ θ ∧ complexity(x) ≤ α
        false  otherwise

When I₁(x) = true, Layer 2 execution is suppressed.

Definition 4: Suppression Rate

ρ = |{x ∈ X_test : I₁(x) = true}| / |X_test|
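Definitions 3 and 4 can be exercised directly. A minimal sketch, assuming toy matcher/responder pairs and a toy complexity proxy (none of these names come from the `spl` package):

```python
def inhibition(x, patterns, theta, alpha, complexity):
    # Definition 3: I1(x) is true when the best match confidence clears
    # theta AND the input is below the complexity threshold alpha
    best = max((phi(x) for phi, psi in patterns), default=0.0)
    return best >= theta and complexity(x) <= alpha

def suppression_rate(X_test, patterns, theta, alpha, complexity):
    # Definition 4: fraction of test inputs where Layer 2 is suppressed
    return sum(inhibition(x, patterns, theta, alpha, complexity)
               for x in X_test) / len(X_test)

# Toy pattern: matcher phi and responder psi (Definition 2)
patterns = [(lambda x: 1.0 if 'hello' in x else 0.0, lambda x: 'greeting')]
X_test = ['hello there', 'hello world', 'explain transformer attention in depth']
rho = suppression_rate(X_test, patterns, theta=0.87, alpha=0.6,
                       complexity=lambda x: min(len(x) / 50, 1.0))
print(rho)  # 2 of the 3 test inputs are handled without Layer 2
```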

📦 Shared State Protocol

The Shared State S serves as the swarm's collective memory, enabling stigmergic coordination across agents.

Structure

S = (P_shared, C, M, A) where:

  • P_shared: Global pattern library
  • C : P_shared → [0, 1]: Pattern → confidence scores
  • M : P_shared → ℕ: Pattern → match counts (reinforcement)
  • A : P_shared → AgentID: Pattern provenance tracking

Confidence Update Rules

Reinforcement (successful match):

C'(p) = C(p) + η(1 - C(p))

Decay (incorrect response):

C'(p) = C(p) · (1 - δ)

This implements stigmergic reinforcement: successful patterns accumulate confidence like pheromone trails, while failed patterns decay.
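A quick sketch of the two update rules, showing that repeated reinforcement drives confidence asymptotically toward 1 (the η and δ values are illustrative, matching the defaults used elsewhere in this README):

```python
def reinforce(c, eta=0.1):
    # Successful match: confidence moves toward 1 at rate eta
    return c + eta * (1 - c)

def decay(c, delta=0.05):
    # Incorrect response: multiplicative confidence decay
    return c * (1 - delta)

c = 0.5
for _ in range(20):
    c = reinforce(c)
print(round(c, 3))  # 0.939 — repeated reinforcement approaches 1.0
```

Because reinforcement is a convex step toward 1 and decay is multiplicative toward 0, confidence stays in [0, 1] by construction.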

Multi-Agent Architecture

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Agent A   │     │   Agent B   │     │   Agent C   │
├─────────────┤     ├─────────────┤     ├─────────────┤
│  Layer 0    │     │  Layer 0    │     │  Layer 0    │
│  Layer 1    │     │  Layer 1    │     │  Layer 1    │
│  Layer 2    │     │  Layer 2    │     │  Layer 2    │
└──────┬──────┘     └──────┬──────┘     └──────┬──────┘
       │ Write             │ Read/Write        │ Write
       ↓                   ↓                   ↓
┌─────────────────────────────────────────────────────┐
│             SHARED STATE (Collective Memory)        │
├─────────────────────────────────────────────────────┤
│  Learned Patterns  │  Confidence  │  Cross-Agent    │
│  P_shared          │  Scores C(p) │  Markers M(p)   │
└─────────────────────────────────────────────────────┘
                           ↓
              Emergent Swarm Intelligence

📈 Intelligence Compounding Theory

Theorem (Intelligence Compounding)

Under mild assumptions, collective competency satisfies:

Γ(n) = 1 - e^(-πμn/k)

where:

  • n: Number of processed requests
  • π: Probability a novel input yields a distillable pattern
  • μ: Measure of input space covered by each pattern
  • k: Coverage constant

Corollary (Logarithmic Learning)

To achieve competency Γ*, the swarm requires:

n* = (k/πμ) · ln(1/(1 - Γ*))

Key insight: Multi-agent systems amplify this effect. If m agents share state, the effective rate is m · π, reducing time to competency by a factor of m.
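The theorem and corollary are mutually consistent and easy to sanity-check numerically. The parameter values below (π, μ, k) are purely illustrative:

```python
import math

def competency(n, pi=0.01, mu=0.05, k=10.0):
    # Gamma(n) = 1 - exp(-pi * mu * n / k)
    return 1 - math.exp(-pi * mu * n / k)

def requests_needed(gamma_star, pi=0.01, mu=0.05, k=10.0):
    # n* = (k / (pi * mu)) * ln(1 / (1 - Gamma*))
    return (k / (pi * mu)) * math.log(1 / (1 - gamma_star))

n_star = requests_needed(0.9)
print(round(competency(n_star), 6))  # recovers the target competency 0.9

# With m agents sharing state, the effective rate is m * pi,
# so n* (and hence time to competency) shrinks by the factor m.
```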


🚀 Quick Start

Installation

# Clone repository
git clone https://github.com/daseinpbc/SPL-FRAMEWORK.git
cd SPL-FRAMEWORK

# Install dependencies
pip install -r requirements.txt

# For multi-agent shared state
pip install redis

Basic Usage

from spl import SPLAgent

# Initialize agent with formal parameters
agent = SPLAgent(
    theta=0.87,      # Confidence threshold (θ)
    alpha=0.6,       # Complexity threshold (α)
    eta=0.1,         # Learning rate (η)
    delta=0.05       # Decay rate (δ)
)

# Add patterns to Layer 1 (P_local)
agent.layer1.add_pattern(
    name='urgent',
    matcher=r'urgent|asap|emergency',  # φ_p
    responder='urgent',                # ψ_p
    confidence=0.95                    # Initial C(p)
)

# Process request
result = agent.process({
    'user_id': 'user123',
    'content': 'URGENT: Server outage in production'
})

print(result)
# {
#   'result': 'urgent',
#   'layer': 1,                    # Handled by Layer 1
#   'cost': 0.0001,
#   'confidence': 0.95,
#   'inhibition': True,            # I₁(x) = true
#   'suppressed_layer2': True      # Layer 2 NOT invoked
# }

Multi-Agent Swarm Configuration

from spl import SPLAgent, SharedState
import redis

# Initialize shared state (collective memory)
redis_client = redis.Redis(host='localhost', port=6379)
shared_state = SharedState(
    client=redis_client,
    theta_inherit=0.75,    # Inheritance threshold
    sync_interval=100      # ms
)

# Create swarm of agents sharing state
agents = [
    SPLAgent(shared_state=shared_state, agent_id=f'agent_{i}')
    for i in range(5)
]

# When Agent A learns a pattern...
agents[0].process({'content': 'Complex query requiring Layer 2...'})

# ...Agents B-E automatically inherit it via Shared State
# Future similar queries resolved at Layer 1 (zero FM cost)

MCP Integration (Foundation Model Agnostic)

from spl import SPLAgent
from spl.mcp_integration import MCPClient
import anthropic

# Layer 2 can use any foundation model via MCP
client = anthropic.Anthropic()
layer2_mcp = MCPClient(
    model="claude-sonnet-4-20250514",
    api_client=client,
)

agent = SPLAgent()
agent.layer2 = layer2_mcp

# Automatic pattern distillation from Layer 2 responses
result = agent.process({
    'content': 'Novel query requiring deliberative reasoning...'
})
# New pattern extracted and added to Shared State

📊 Experimental Results

Benchmark: 100,000 Heterogeneous Enterprise Tasks

Dataset composition:

  • Email Classification: 40,000 tasks
  • Customer Inquiry Resolution: 35,000 tasks
  • Data Pipeline Orchestration: 25,000 tasks

Single-Agent Performance

| System         | Cost (USD) | Latency (ms) | Accuracy | Suppression Rate |
|----------------|------------|--------------|----------|------------------|
| Monolithic LLM | $1,247.32  | 847 ± 312    | 98.2%    | 0.0%             |
| FrugalGPT      | $312.18    | 523 ± 287    | 97.4%    | —                |
| RouteLLM       | $287.45    | 498 ± 264    | 97.1%    | —                |
| SPL (Ours)     | $89.47     | 38 ± 142     | 96.9%    | 94.5%            |

Layer Distribution

| Layer                  | Requests | Percentage | Cost Contribution |
|------------------------|----------|------------|-------------------|
| Layer 0 (Reactive)     | 4,823    | 4.8%       | $0.00 (0.0%)      |
| Layer 1 (Tactical)     | 89,672   | 89.7%      | $8.97 (10.0%)     |
| Layer 2 (Deliberative) | 5,505    | 5.5%       | $80.50 (90.0%)    |

Despite handling only 5.5% of requests, Layer 2 accounts for 90% of costs, validating the economic case for hierarchical suppression.

Multi-Agent Swarm Learning

| Agent   | Tasks          | Isolated ρ | Swarm ρ | Improvement |
|---------|----------------|------------|---------|-------------|
| Agent A | 1–20,000       | 87.2%      | 87.2%   | —           |
| Agent B | 20,001–40,000  | 88.1%      | 93.4%   | +6.0%       |
| Agent C | 40,001–60,000  | 87.9%      | 95.7%   | +8.9%       |
| Agent D | 60,001–80,000  | 88.3%      | 96.8%   | +9.6%       |
| Agent E | 80,001–100,000 | 88.0%      | 97.2%   | +10.4%      |
| Average | —              | 87.9%      | 94.1%   | +7.0%       |

Result: 42% reduction in Layer 2 escalations compared to isolated agents.

Ablation Study

| Configuration       | Cost (USD) | Accuracy | Δ Accuracy |
|---------------------|------------|----------|------------|
| Full SPL            | $89.47     | 96.9%    | —          |
| No Layer 0          | $89.47     | 96.9%    | +0.0%      |
| No Layer 1          | $1,192.84  | 98.1%    | +1.2%      |
| θ = 0.95 (stricter) | $142.31    | 97.6%    | +0.7%      |
| θ = 0.75 (looser)   | $67.23     | 94.2%    | -2.7%      |
| No Shared State     | $127.83    | 96.4%    | -0.5%      |

Key findings:

  • Disabling Layer 1 increases cost 13.3× while improving accuracy only 1.2%
  • Default threshold θ = 0.87 optimizes the cost-accuracy tradeoff
  • Shared State contributes 30% additional cost savings through pattern reuse

πŸ” Theoretical Guarantees

Theorem 1: Accuracy Preservation

Let Ξ΅ be the maximum error rate of patterns. Then SPL's overall accuracy satisfies:

Acc_SPL β‰₯ (1 - Ξ΅) Β· ρ + Acc_L2 Β· (1 - ρ)

Corollary: If Ξ΅ ≀ 0.05 and Acc_L2 β‰₯ 0.98, then Acc_SPL β‰₯ 0.95 for all ρ.
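The bound and its corollary can be checked numerically; a minimal verification using the corollary's values (ε = 0.05, Acc_L2 = 0.98):

```python
def spl_accuracy_lower_bound(epsilon, rho, acc_l2):
    # Theorem 1: Acc_SPL >= (1 - epsilon) * rho + Acc_L2 * (1 - rho)
    return (1 - epsilon) * rho + acc_l2 * (1 - rho)

# Sweep rho over [0, 1]; the bound is linear in rho, so the minimum
# sits at an endpoint (rho = 1, where all traffic is pattern-handled)
worst = min(spl_accuracy_lower_bound(0.05, rho / 100, 0.98) for rho in range(101))
print(round(worst, 4))  # 0.95, attained at rho = 1.0
```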

Theorem 3: Graceful Degradation

If Layer 2 becomes unavailable, the system maintains accuracy:

Acc_degraded = (1 - ε) · Γ(t*)

on inputs where I₁(x) = true.

This formalizes the headless swarm property: accumulated competencies persist even without centralized reasoning resources.


📚 Repository Structure

SPL-FRAMEWORK/
├── README.md                    # This file
├── LICENSE                      # MIT License
├── requirements.txt             # Python dependencies
├── setup.py                     # Package setup
├── spl_arxiv_paper.pdf          # Full paper with proofs
│
├── spl/
│   ├── __init__.py              # Package initialization
│   ├── agent.py                 # SPL Agent (Definition 1)
│   ├── layer0_reactive.py       # Structural validation
│   ├── layer1_tactical.py       # Pattern matching + inhibition
│   ├── layer2_deliberative.py   # Foundation model + distillation
│   ├── shared_state.py          # Collective memory protocol
│   ├── pattern.py               # Pattern class (Definition 2)
│   ├── cost_tracker.py          # Cost monitoring
│   └── mcp_integration.py       # MCP client support
│
├── examples/
│   ├── email_categorization.py  # Email triage (paper Section 6.1)
│   ├── multi_agent_swarm.py     # Swarm learning (paper Section 6.5)
│   └── intelligence_compounding.py  # Γ(n) curves (paper Section 6.6)
│
├── tests/
│   ├── test_layer0.py           # Validation tests
│   ├── test_layer1.py           # Pattern matching + inhibition tests
│   ├── test_layer2.py           # Distillation tests
│   ├── test_shared_state.py     # Collective memory tests
│   └── test_accuracy_bounds.py  # Theorem 1 verification
│
├── comparison/
│   └── baselines/               # FrugalGPT, RouteLLM comparisons
│
└── docs/
    ├── ARCHITECTURE.md          # Formal framework details
    ├── SHARED_STATE_PROTOCOL.md # Synchronization semantics
    ├── INTELLIGENCE_COMPOUNDING.md  # Theorem 2 proof
    └── BENCHMARKS.md            # Full experimental results

🧪 Testing

# Run all tests
pytest tests/

# Run with coverage
pytest tests/ --cov=spl/

# Verify accuracy bounds (Theorem 1)
pytest tests/test_accuracy_bounds.py -v

🤖 Supported Foundation Models

SPL is foundation model agnostic via MCP:

| Provider    | Models                                 |
|-------------|----------------------------------------|
| Anthropic   | Claude Opus 4.5, Sonnet 4.5, Haiku 4.5 |
| OpenAI      | GPT-4o, GPT-4 Turbo                    |
| Open Source | Llama 3, Mistral, Mixtral              |
| Custom      | Fine-tuned, proprietary, on-premise    |

📖 Citation

@article{cuce2026spl,
  title={Subsumption Pattern Learning: A Formal Framework for 
         Self-Distilling Swarm Intelligence Through Shared 
         Collective Memory},
  author={Cuce, Pamela},
  journal={arXiv preprint arXiv:2501.XXXXX},
  year={2026},
  institution={Tufts University}
}

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Good First Issues

  • Implement additional pattern types (semantic embeddings)
  • Add support for new foundation model providers
  • Benchmark on additional datasets
  • Improve documentation

📧 Contact

Author: Pamela Cuce (pamela.cuce@tufts.edu)


📄 License

MIT License (see LICENSE for details).


πŸ™ Acknowledgments

SPL builds on foundational research from:

  • Rodney A. Brooks β€” Subsumption architecture (MIT, 1986)
  • Ronald C. Arkin β€” Behavior-based robotics
  • Albert Bandura β€” Social learning theory
  • James Kennedy & Russell Eberhart β€” Swarm intelligence
  • Daniel Wegner β€” Transactive memory systems

🚀 Roadmap

  • v4.0: Automated pattern distillation with learned extractors
  • v4.1: Adaptive threshold learning (θ, α optimization)
  • v4.2: Cross-domain pattern transfer
  • v4.3: Large-scale deployment (100+ agents)
  • v5.0: Continuous learning from production traffic

SPL provides a principled path toward AI systems that grow more intelligent with every transaction while maintaining robustness through decentralized resilience.

Made with ❤️, bringing 40+ years of robotics intelligence to modern foundation models.
