SEP-Memory is a research and engineering project that integrates the Summarize–Explain–Predict (SEP) forecasting framework with a multi-layer memory system to improve both the accuracy and interpretability of stock predictions.
The system enhances predictive modeling by combining LLM-driven explanations with structured memory layers, reinforcement learning, and efficient retrieval.
- Short-Term Memory: Daily summaries of news, tweets, and stock movements
- Mid-Term Memory: Refined explanations and self-reflections for correction
- Long-Term Memory: Consolidated high-reward patterns and signals
- Reflection Memory: Error cases and failed predictions for targeted retraining
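The four layers above can be sketched as a small data model. This is an illustrative schema, not the project's actual `MemoryDB` classes; the `Layer` enum, `MemoryEntry`, and `MemoryStore` names are assumptions:

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    SHORT = "short"            # daily summaries of news/tweets/price moves
    MID = "mid"                # refined explanations and self-reflections
    LONG = "long"              # consolidated high-reward patterns and signals
    REFLECTION = "reflection"  # error cases kept for targeted retraining

@dataclass
class MemoryEntry:
    text: str
    layer: Layer
    reward: float = 0.0        # running reward from realized price moves

class MemoryStore:
    def __init__(self):
        self.entries: list[MemoryEntry] = []

    def add(self, text: str, layer: Layer, reward: float = 0.0) -> None:
        self.entries.append(MemoryEntry(text, layer, reward))

    def by_layer(self, layer: Layer) -> list[MemoryEntry]:
        return [e for e in self.entries if e.layer is layer]
```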
- Retrieve relevant past insights with FAISS-powered embeddings
- Inject memory context into prompts for more consistent and reliable explanations
- Enable self-reflective correction loops in explanations
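A minimal sketch of retrieval plus prompt injection, using a brute-force NumPy cosine search as a stand-in for FAISS (FAISS's `IndexFlatIP` performs the same normalized inner-product search at scale). Function names and the prompt template are illustrative assumptions:

```python
import numpy as np

def retrieve(query_vec, memory_vecs, memory_texts, k=3):
    # Cosine similarity via normalized inner product; FAISS IndexFlatIP
    # does the equivalent search over OpenAI embedding vectors at scale.
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    top = np.argsort(-(m @ q))[:k]
    return [memory_texts[i] for i in top]

def build_prompt(task, insights):
    # Inject retrieved memory into the prompt so explanations stay
    # consistent with what worked (or failed) before.
    context = "\n".join(f"- {t}" for t in insights)
    return f"Relevant past insights:\n{context}\n\nTask: {task}"
```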
- Summarize → Ingest daily market data & generate structured summaries
- Explain → LLM produces reasoning → reflection step refines explanations
- Predict → GRPO policy generates trade signals
- Reinforce → Rewards from real price movements written back into memory layers
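The four-step loop above can be sketched end to end. The `llm`, `retrieve`, and `policy` callables are hypothetical placeholders for the model backbone, memory lookup, and GRPO policy; the reward rule shown (signed agreement with the realized move) is an assumption:

```python
def sep_step(market_data, llm, retrieve, policy):
    # Summarize: compress the day's news/tweets/prices into a structured summary
    summary = llm(f"Summarize: {market_data}")
    # Explain v1, then refine it against retrieved memory (self-reflection)
    draft = llm(f"Explain the likely move given: {summary}")
    insights = retrieve(summary)
    refined = llm(f"Revise using past insights {insights}: {draft}")
    # Predict: the GRPO-trained policy maps the refined explanation to a signal
    signal = policy(refined)
    return summary, refined, signal

def reinforce(signal, realized_return):
    # Reward = signed agreement between the signal (+1 long / -1 short)
    # and the realized price move; written back into the memory layers.
    return signal * realized_return
```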
- Important knowledge is automatically promoted: short → mid → long-term memory
- Low-value or stale knowledge is pruned
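One way the promote/prune pass could work, sketched over a plain-dict schema. The thresholds and field names are illustrative assumptions, not the project's actual consolidation logic:

```python
LAYERS = ["short", "mid", "long"]

def consolidate(entries, promote_at=1.0, prune_at=-0.5):
    # entries: list of {"text", "layer", "reward"} dicts (illustrative schema)
    kept = []
    for e in entries:
        if e["reward"] <= prune_at:
            continue  # low-value or stale: pruned
        if e["reward"] >= promote_at and e["layer"] != "long":
            # promote one layer up: short → mid → long
            e = {**e, "layer": LAYERS[LAYERS.index(e["layer"]) + 1]}
        kept.append(e)
    return kept
```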
- LLM Backbone: Transformers + PEFT (LoRA, 4-bit QLoRA)
- Reinforcement Learning: GRPO with reward models
- Memory System: Custom `MemoryDB` + `BrainDB` with multi-layer storage
- Retrieval: OpenAI embeddings + FAISS for sub-second lookup
- Training Data: Daily financial news + social media streams
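A typical Transformers + PEFT setup for the LoRA / 4-bit QLoRA fine-tuning named above. This is a generic sketch, not the project's training script: `"base-model-id"` is a placeholder, and `target_modules` names vary by model architecture.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization for the frozen base model (QLoRA)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "base-model-id",  # placeholder: any causal LM checkpoint
    quantization_config=bnb,
)

# Low-rank adapters on the attention projections; only these are trained
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # architecture-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
```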
```mermaid
flowchart TD
    A[Market Data] --> B[Summarize]
    B --> C[Explain v1]
    C --> D[Self-Reflection with Memory]
    D --> E[Explain v2]
    E --> F[Predict with GRPO Agent]
    F -->|Rewards| G[Update Long-Term & Reflection Memory]
    G -->|Promote/Prune| B
```