
🌟 MEMORAI - Persistent AI Memory Server

MEMORAI Banner

Author: Ayman Taha
License: MIT
Version: 1.1.0

Deploy to Render

⚠️ Note: This project provides a foundational memory layer,
not a fully autonomous cognitive memory system.

Table of Contents

  • Overview
  • Why MEMORAI?
  • Architecture
  • Features
  • Installation
  • Configuration
  • API Documentation
  • Deployment
  • Development
  • Contributing
  • License
  • Support

Overview

MEMORAI is a persistent AI memory server that acts as middleware between client applications and LLM providers. It mitigates stateless LLM limitations by storing user profile data and conversation memory, then injecting relevant context during chat generation.

Why MEMORAI?

Large Language Models are powerful, but they commonly face the following challenges:

| Challenge | MEMORAI Solution |
| --- | --- |
| Stateless nature | Persistent user identity and conversation history |
| Limited context window | Memory retrieval and contextual injection |
| No personalization by default | User preferences and custom instructions |
| Context drift over time | Memory pruning and relevance filtering |

πŸ—οΈ Architecture

```mermaid
graph TB
    subgraph "Client Applications"
        A["Web Apps"]
        B["Mobile Apps"]
        C["CLI Tools"]
    end

    subgraph "MEMORAI Core"
        D["API Gateway"]
        E["API Key Authentication"]
        F["User Manager"]
        G["Memory Engine"]
        H["Context Builder"]
        I["Provider Router"]
        J["Response Processor"]
    end

    subgraph "Storage Layer"
        K["SQLite/PostgreSQL"]
        M["Memory Index (SQL)"]
    end

    subgraph "LLM Providers"
        N["OpenAI"]
        O["Qwen"]
        P["DeepSeek"]
    end

    A --> D
    B --> D
    C --> D
    D --> E
    E --> F
    F --> G
    G --> H
    H --> I
    I --> N
    I --> O
    I --> P
    N --> J
    O --> J
    P --> J
    G --> K
    H --> M
    J --> D
```

Core Components

| Component | Function | Benefit |
| --- | --- | --- |
| Memory Engine | Stores and retrieves user memories | Long-term context retention |
| Context Builder | Constructs prompts with relevant memories | Better response continuity |
| Provider Abstraction | Supports multiple provider adapters | Extendable integration model |
| User Manager | Handles user profiles and preferences | Personalized behavior |
| Pruning System | Manages memory lifecycle | Controlled storage growth |

✨ Features

🧠 Memory Management

  • Conversation memory persistence
  • Relevance-based memory retrieval
  • Memory pruning endpoint with retention window
  • Context augmentation during chat generation
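
The retrieval behavior above can be sketched as a simple relevance scorer. This is a hypothetical illustration only; the function names (`score_memory`, `retrieve_relevant`) are invented here, and MEMORAI's actual Memory Engine may score and select memories differently.

```python
# Hypothetical sketch of relevance-based memory retrieval.
# Names are invented for illustration, not taken from MEMORAI's codebase.

def score_memory(memory_text: str, query: str) -> float:
    """Score a stored memory by keyword overlap with the incoming message."""
    mem_words = set(memory_text.lower().split())
    query_words = set(query.lower().split())
    if not query_words:
        return 0.0
    return len(mem_words & query_words) / len(query_words)

def retrieve_relevant(memories: list[str], query: str, top_k: int = 3) -> list[str]:
    """Return the top_k memories most relevant to the query, best first."""
    scored = sorted(memories, key=lambda m: score_memory(m, query), reverse=True)
    return [m for m in scored[:top_k] if score_memory(m, query) > 0]

memories = [
    "User prefers concise answers",
    "User is learning Rust",
    "User lives in Cairo",
]
print(retrieve_relevant(memories, "Help me with a Rust question"))
# → ['User is learning Rust']
```

A production engine would typically replace keyword overlap with embeddings or full-text search, but the select-and-inject flow is the same.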

👤 User Profile Support

  • Persistent user identity (user_id)
  • Language and tone preferences
  • Custom instruction storage
  • Last-active tracking and message count

🔌 Provider Support

  • OpenAI integration
  • Qwen adapter
  • DeepSeek adapter
  • Central provider factory for deterministic routing
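
A central factory for deterministic routing is commonly a small registry mapping provider names to adapter classes. The sketch below is a hypothetical illustration (the classes and `get_provider` are invented), not the contents of MEMORAI's actual `factory.py`:

```python
# Hypothetical provider-factory sketch; adapter classes are stubs.

class BaseProvider:
    name = "base"
    def chat(self, message: str) -> str:
        raise NotImplementedError

class OpenAIProvider(BaseProvider):
    name = "openai"
    def chat(self, message: str) -> str:
        return f"[openai] reply to: {message}"  # a real adapter would call the API

class QwenProvider(BaseProvider):
    name = "qwen"
    def chat(self, message: str) -> str:
        return f"[qwen] reply to: {message}"

_REGISTRY = {cls.name: cls for cls in (OpenAIProvider, QwenProvider)}

def get_provider(name: str, default: str = "openai") -> BaseProvider:
    """Deterministically route a provider name to an adapter instance."""
    cls = _REGISTRY.get(name) or _REGISTRY[default]
    return cls()

print(get_provider("qwen").name)     # → qwen
print(get_provider("unknown").name)  # → openai (falls back to the default)
```

Registering adapters in one dictionary keeps routing deterministic and makes adding a new provider a one-line change.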

πŸ›‘οΈ Security

  • API key protection for /api/v1/* routes
  • Structured JSON error responses
  • Restricted CORS origin policy

🚀 Installation

Prerequisites

  • Python 3.10+
  • Git
  • Docker (optional)
  • Provider API key (required for real LLM calls)

Quick Start

Method 1: Local Installation

```bash
# Clone the repository
git clone https://github.com/aymantaha3345/memorai.git
cd memorai

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp .env.example .env
# Edit .env and set INTERNAL_API_KEY + provider keys

# Launch the server
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

Method 2: Docker Deployment

```bash
# Build and run with Docker Compose
docker compose up --build
```

βš™οΈ Configuration

Environment Variables

| Variable | Required | Default | Purpose |
| --- | --- | --- | --- |
| `INTERNAL_API_KEY` | Yes | `change-me-in-production` | API auth key for protected endpoints |
| `OPENAI_API_KEY` | For OpenAI | `""` | OpenAI API authentication |
| `QWEN_API_KEY` | For Qwen | `""` | Qwen API authentication |
| `DEEPSEEK_API_KEY` | For DeepSeek | `""` | DeepSeek API authentication |
| `DATABASE_URL` | No | `sqlite:///./data/memorai.db` | Database connection |
| `DEFAULT_PROVIDER` | No | `openai` | Default provider |
| `MAX_CONTEXT_TOKENS` | No | `8000` | Context token limit |
| `MEMORY_RETENTION_DAYS` | No | `30` | Memory retention period |
| `LOG_LEVEL` | No | `INFO` | Logging verbosity |
| `ALLOWED_ORIGINS` | No | `http://localhost:3000,http://127.0.0.1:3000` | CORS allow-list |

Sample Configuration

```bash
# App
APP_ENV=development
APP_NAME=MEMORAI - Persistent AI Memory Server
APP_VERSION=1.1.0
LOG_LEVEL=INFO

# Security
INTERNAL_API_KEY=your-internal-api-key
ALLOWED_ORIGINS=http://localhost:3000,http://127.0.0.1:3000

# Database
DATABASE_URL=sqlite:///./data/memorai.db

# Providers
DEFAULT_PROVIDER=openai
OPENAI_API_KEY=sk-your-openai-key-here
QWEN_API_KEY=your-qwen-key-here
DEEPSEEK_API_KEY=your-deepseek-key-here
```

📡 API Documentation

Base Endpoint

http://localhost:8000

Authentication

Include API key using one of:

X-API-Key: your-internal-api-key

or

Authorization: Bearer your-internal-api-key
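
Either header style can be built with a small client-side helper. This is a hypothetical convenience for callers, not part of MEMORAI itself:

```python
def auth_headers(api_key: str, scheme: str = "x-api-key") -> dict[str, str]:
    """Build request headers for either supported auth style."""
    if scheme == "bearer":
        return {"Authorization": f"Bearer {api_key}"}
    return {"X-API-Key": api_key}

print(auth_headers("my-key"))            # → {'X-API-Key': 'my-key'}
print(auth_headers("my-key", "bearer"))  # → {'Authorization': 'Bearer my-key'}
```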

Core Endpoints

Chat Completion

POST /api/v1/chat

Request Body:

```json
{
  "user_id": "user-uuid-string",
  "message": "How can you help me today?",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "temperature": 0.7,
  "max_tokens": 1000
}
```

Response:

```json
{
  "id": "response-id",
  "user_id": "user-uuid-string",
  "message": "Generated assistant reply",
  "timestamp": "2026-01-15T10:30:00Z",
  "tokens_used": 85,
  "memory_injected": true,
  "provider_used": "openai"
}
```
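
Calling the chat endpoint from Python can look like the sketch below, which assembles the documented request body and posts it with the standard library. The helper names (`build_chat_request`, `post_chat`) are invented for illustration; adjust the host and key for your deployment.

```python
import json
import urllib.request

def build_chat_request(user_id: str, message: str, **overrides) -> dict:
    """Assemble a /api/v1/chat body using the documented field defaults."""
    body = {
        "user_id": user_id,
        "message": message,
        "provider": "openai",
        "model": "gpt-4o-mini",
        "temperature": 0.7,
        "max_tokens": 1000,
    }
    body.update(overrides)
    return body

def post_chat(base_url: str, api_key: str, body: dict) -> dict:
    """POST the body to /api/v1/chat with X-API-Key auth; returns parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/chat",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json", "X-API-Key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_chat_request("user-uuid-string", "How can you help me today?")
# post_chat("http://localhost:8000", "your-internal-api-key", body)  # needs a running server
print(body["model"])  # → gpt-4o-mini
```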

User Profile Management

GET /api/v1/user/{user_id}

Update User Preferences

PUT /api/v1/user/{user_id}/preferences

Request Body:

```json
{
  "name": "Jane Smith",
  "language_preference": "en",
  "tone_style_preference": "friendly",
  "custom_instructions": "Respond concisely with examples when useful."
}
```

Memory Pruning

POST /api/v1/memory/prune

Request Body:

```json
{
  "user_id": "user-uuid-string",
  "retention_days": 30
}
```

Health Check

GET /health

Response:

```json
{
  "status": "healthy",
  "version": "1.1.0",
  "provider_default": "openai",
  "checks": {
    "database": {
      "ok": true,
      "error": null
    }
  },
  "timestamp": "2026-01-15T10:30:00Z"
}
```

🌐 Deployment

Local Development

```bash
# Development mode with auto-reload
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

# Production-like mode
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

Docker Compose

```yaml
version: '3.8'
services:
  memorai:
    build: .
    ports:
      - "8000:8000"
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - INTERNAL_API_KEY=${INTERNAL_API_KEY}
      - DATABASE_URL=postgresql://memorai:memorai@db:5432/memorai
      - DEFAULT_PROVIDER=openai
      - LOG_LEVEL=INFO
      - ALLOWED_ORIGINS=http://localhost:3000
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=memorai
      - POSTGRES_USER=memorai
      - POSTGRES_PASSWORD=memorai
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  postgres_data:
```

Render.com Deployment

Deploy to Render

πŸ› οΈ Development

Project Structure

```
memorai/
├── app/
│   ├── __init__.py
│   ├── main.py
│   ├── api/
│   │   ├── __init__.py
│   │   └── v1/
│   │       ├── __init__.py
│   │       ├── chat.py
│   │       ├── user.py
│   │       └── memory.py
│   ├── core/
│   │   ├── __init__.py
│   │   ├── config.py
│   │   ├── database.py
│   │   ├── security.py
│   │   └── errors.py
│   ├── memory/
│   │   ├── __init__.py
│   │   ├── engine.py
│   │   ├── manager.py
│   │   └── storage.py
│   ├── models/
│   │   ├── __init__.py
│   │   ├── user.py
│   │   ├── memory.py
│   │   ├── chat.py
│   │   └── schemas.py
│   ├── providers/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── openai.py
│   │   ├── qwen.py
│   │   ├── deepseek.py
│   │   └── factory.py
│   └── utils/
│       ├── __init__.py
│       └── helpers.py
├── tests/
│   ├── conftest.py
│   └── test_api.py
├── Dockerfile
├── docker-compose.yml
├── requirements.txt
├── .env.example
├── readme.md
└── LICENSE
```

Running Tests

```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_api.py -v
```

🤝 Contributing

Getting Started

  1. Fork the repository
  2. Clone your fork: git clone https://github.com/yourusername/memorai.git
  3. Create a feature branch: git checkout -b feature/amazing-feature
  4. Make your changes
  5. Test thoroughly
  6. Commit your changes: git commit -m 'Add amazing feature'
  7. Push to your branch: git push origin feature/amazing-feature
  8. Open a Pull Request

Development Guidelines

  • Follow PEP 8 coding standards
  • Write clear docstrings for non-trivial logic
  • Include tests for new features
  • Update documentation when behavior changes
  • Keep commits atomic and descriptive

📄 License

MIT License - See LICENSE file for details.

🆘 Support

For questions or bug reports, please open an issue on the project's GitHub repository.

🌟 Built with ❤️ by Ayman Taha
Making AI Conversations Truly Conversational

🚀 Deploy Now | 📚 Documentation | 🐛 Report Bug
