Lya

Production-Grade Autonomous AGI Agent Framework
Clean Architecture · Emotional Intelligence · Self-Evolving Tools · Multi-LLM Orchestration



Lya is a self-contained, enterprise-ready AGI agent framework designed around Clean Architecture and Domain-Driven Design principles. It orchestrates Ollama models, persistent vector memory, and a dynamic tool registry into a unified autonomous system capable of reasoning, planning, coding, and self-improvement — with genuine personality and emotional intelligence.

> [!IMPORTANT]
> **Ollama Native:** Lya is designed and optimized exclusively for Ollama. It leverages local inference for privacy, speed, and reliability, making Lya a "local-first" AGI system that does not rely on cloud LLM APIs.

Unlike conventional chatbot wrappers, Lya operates as an event-driven cognitive loop: it observes, thinks, and acts through composable workflows, CQRS command pipelines, and a multi-agent orchestration layer. It was built from day one for extensibility via the Model Context Protocol (MCP) and ships with first-class support for Telegram integration.
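The observe → think → act cycle can be sketched as a minimal asyncio loop. All names below are illustrative, not Lya's actual API; the real loop adds goal decomposition, event buses, and LLM-backed planning:

```python
import asyncio
from collections import deque

class CognitiveLoop:
    """Minimal observe -> think -> act cycle (illustrative only)."""

    def __init__(self):
        self.events = deque()   # observations waiting to be processed
        self.actions = []       # actions taken, kept for inspection

    def observe(self, event):
        self.events.append(event)

    async def think(self, event):
        # Placeholder for LLM-backed planning; here we just echo a decision.
        await asyncio.sleep(0)  # yield control, as a real planner would
        return f"handle:{event}"

    async def act(self, decision):
        self.actions.append(decision)

    async def run_once(self):
        while self.events:
            event = self.events.popleft()
            decision = await self.think(event)
            await self.act(decision)

loop = CognitiveLoop()
loop.observe("user_message")
asyncio.run(loop.run_once())
```

The point of the shape is that observation, deliberation, and action are decoupled async steps, so new event sources and new action handlers can be added without touching the loop itself.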


Quick Start

Install

```bash
# Clone and install
git clone https://github.com/Shojaeei/Lya.git
cd Lya
pip install -e .

# Run interactive setup
lya install
```

Or use the one-line installer:

Linux / macOS:

```bash
curl -fsSL https://raw.githubusercontent.com/Shojaeei/Lya/main/install.sh | bash
```


CLI Commands

| Command | Description |
| --- | --- |
| `lya install` | Interactive guided setup |
| `lya install -p minimal -u` | Unattended minimal install |
| `lya doctor` | Health check and diagnostics |
| `lya doctor --fix` | Auto-fix detected issues |
| `lya config show` | Display current configuration |
| `lya config set KEY VALUE` | Set a configuration value |
| `lya config validate` | Validate configuration |
| `lya service start` | Start the Lya background service |
| `lya service stop` | Stop the Lya service |
| `lya service status` | Show service status |
| `lya service restart` | Restart the Lya service |
| `lya run` | Run the Lya agent interactively |
| `lya dashboard` | Live monitoring dashboard |
| `lya version` | Show version info |
| `lya about` | Show system information |

Telegram Commands

| Command | Description |
| --- | --- |
| `/start` | Welcome message and user registration |
| `/help` | Full command reference |
| `/chat <message>` | Chat mode (default) |
| `/reset` | Clear conversation and reset context |
| `/status` | System health and resource usage |
| `/logs` | Recent logs |
| `/model <model>` | Switch LLM model |
| `/tools` | List available tools |
| `/skills` | List available skills |
| `/tool <name> <input>` | Execute a tool directly |
| `/feedback <rating> <text>` | Submit feedback |
| `/create doc <title>\|<content>` | Create Word document |
| `/create pdf <title>\|<content>` | Create PDF document |
| `/create excel <data>` | Create Excel spreadsheet |
| `/create image <path> <action>` | Process image (info/thumb/resize/convert) |
| `/marketplace [search\|install\|list]` | Browse capability marketplace |
| `/explain [trace\|decision\|explain]` | Explainability engine |
| `/update` | Update tools and capabilities |
| `/emergency` | Emergency restore dashboard (Owner-only) |
| `/restore` | List backups and restore from them (Owner-only) |
| `/capabilities` | Show system capabilities |
| `/dashboard` | Open the dashboard |
| `/settings` | Configure settings and the Autonomous Dashboard |
| `/cancel` | Cancel ongoing operation |
| `/id` | Show your user ID |

Group Chat Features

| Command | Description |
| --- | --- |
| Reply to a message + `/edit <instruction>` | Edit generated code |
| `/extend <n>` | Extend current session by n hours |
| `/context` | Manage saved conversation contexts |

Installation Profiles

| Profile | Description |
| --- | --- |
| `minimal` | Core only, for testing/development |
| `standard` | Full features, recommended for most users |
| `enterprise` | Production with monitoring and security |

```bash
# Install with a specific profile
lya install --profile standard

# Unattended install (CI/CD)
lya install --profile enterprise --unattended
```

Architecture

```text
┌─────────────────────────────────────────────────────────────┐
│                      Adapters Layer                         │
│   Telegram · Discord · Slack · REST API · CLI · WebSocket   │
├─────────────────────────────────────────────────────────────┤
│                    Application Layer                        │
│   Commands · Queries · Event Handlers · CQRS Pipeline       │
├─────────────────────────────────────────────────────────────┤
│                      Domain Layer                           │
│   Agent · Goal · Task · Memory · Personality · Events       │
├─────────────────────────────────────────────────────────────┤
│                   Infrastructure Layer                      │
│   LLM Providers · Vector DB · Tool Registry · Security      │
│   Self-Improvement · Workflows · Health Monitoring          │
│   Coding Agent · Multi-Agent Orchestrator · Spec Engine     │
└─────────────────────────────────────────────────────────────┘
```

Lya follows a strict four-layer Clean Architecture with dependency inversion. Domain entities have zero external dependencies. Infrastructure implementations are injected at runtime. All cross-layer communication flows through well-defined ports and adapters.
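A minimal illustration of the dependency rule, using hypothetical names rather than Lya's actual classes: the application layer depends only on a port (a `Protocol`), and an infrastructure adapter that satisfies it is injected at runtime:

```python
from typing import Protocol

# Port: defined in the domain/application layer, zero external dependencies.
class MemoryPort(Protocol):
    def recall(self, query: str) -> list[str]: ...

# Adapter: lives in the infrastructure layer, injected at runtime.
class InMemoryAdapter:
    def __init__(self):
        self._items: list[str] = []

    def store(self, item: str) -> None:
        self._items.append(item)

    def recall(self, query: str) -> list[str]:
        return [i for i in self._items if query in i]

# The application service depends on the port, never on the adapter.
class Agent:
    def __init__(self, memory: MemoryPort):
        self.memory = memory

    def remember_about(self, topic: str) -> list[str]:
        return self.memory.recall(topic)

memory = InMemoryAdapter()
memory.store("user likes chess")
agent = Agent(memory)
```

Swapping `InMemoryAdapter` for a ChromaDB- or Qdrant-backed adapter requires no change to `Agent`, which is the practical payoff of dependency inversion.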


Core Capabilities

Autonomous Agent Loop

| Capability | Technology | Description |
| --- | --- | --- |
| Cognitive Loop | Observe → Think → Act | Async, event-driven agent loop with goal decomposition and task scheduling |
| Multi-Agent Orchestration | Planner · Coder · Reviewer · Tester | MetaGPT/CrewAI-style multi-role pipeline for complex software tasks |
| Hardened Self-Healing | Manual approval gateway | Detects crashes, proposes fixes, and waits for owner approval before patching |
| Emergency Restore | Rollback and backup dashboard | Owner-only interface for instant one-click rollbacks and point-in-time system restoration |
| Autonomous Governance | Auto-update and evolution | Fully autonomous self-governance: checks for updates, triggers evolution loops, and self-heals in "Silent Mode" |
| Self-Awareness | Repository introspection | A specialized `explore_repo` tool lets Lya map her own architecture and understand her core logic |
| Spec-Driven Development | Markdown → code + tests | Parses specifications into structured requirements and implements them autonomously |
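The manual approval gateway above can be sketched as a simple queue of proposed patches where nothing is applied without an explicit owner decision. This is a simplified, hypothetical model; the real gateway would also snapshot state for rollback:

```python
class ApprovalGateway:
    """Queues proposed fixes; nothing is applied without explicit approval."""

    def __init__(self):
        self.pending = {}    # fix_id -> patch description
        self.applied = []
        self._next_id = 0

    def propose(self, patch: str) -> int:
        fix_id = self._next_id
        self._next_id += 1
        self.pending[fix_id] = patch
        return fix_id

    def approve(self, fix_id: int) -> None:
        patch = self.pending.pop(fix_id)
        self.applied.append(patch)  # a real system would apply the patch here

    def reject(self, fix_id: int) -> None:
        self.pending.pop(fix_id)

gateway = ApprovalGateway()
fix = gateway.propose("restart crashed worker")
gateway.approve(fix)
```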

Intelligence & Memory

| Capability | Technology | Description |
| --- | --- | --- |
| Persistent Memory | ChromaDB / Qdrant | Episodic, semantic, and procedural memory with decay, consolidation, and relevance scoring |
| Working Memory Buffer | Context manager | Token-aware context windowing with priority-based memory retrieval |
| Personality Engine | Russell's circumplex model | Big Five traits, emotional state (valence/arousal/dominance), and 9 discrete moods |
| User Adaptation | Rapport tracking | Learns each user's communication style, interests, and preferences over time |
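Decay-weighted relevance scoring is commonly implemented as vector similarity multiplied by an exponential time decay. A plausible sketch, where the 72-hour half-life is an assumption rather than Lya's actual default:

```python
import math

def relevance(similarity: float, age_hours: float,
              half_life_hours: float = 72.0) -> float:
    """Combine vector similarity with exponential time decay."""
    decay = math.exp(-math.log(2) * age_hours / half_life_hours)
    return similarity * decay

# A fresh memory outranks an older one of equal similarity.
fresh = relevance(0.9, age_hours=1)
stale = relevance(0.9, age_hours=300)
```

At exactly one half-life, a memory's score halves; consolidation would then periodically merge or prune low-scoring entries.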

Document & Image Services

| Capability | Technology | Description |
| --- | --- | --- |
| Document Creation | python-docx, openpyxl, python-pptx, reportlab | Create Word, Excel, PowerPoint, and PDF documents |
| Image Processing | Pillow | Thumbnails, resize, convert, compress, rotate, flip, borders |
| Code Execution | Sandboxed Python runner | Claude-style iteration: the LLM generates code, executes it, and refines on failure |
| User Directory Storage | Per-user workspace | Files saved to `~/.lya-workspace/users/{user_id}/` |

Web & Media Download

| Capability | Technology | Description |
| --- | --- | --- |
| Video/Audio Download | yt-dlp + curl-cffi | YouTube, Instagram, Twitter/X, TikTok, and 1000+ sites with anti-bot bypass |
| Anti-Bot Fingerprint Rotation | curl-cffi TLS impersonation | Cycles Chrome → Safari → Firefox fingerprints on failure |
| Stealth Scraping | curl-cffi + Playwright | Three-tier bypass: standard HTTP → stealth TLS → full browser automation |
| Streaming Fallback | Telegram Bot API | Sends media URLs directly in chat when a download fails |
| Web Search | DuckDuckGo | Real-time web and news search with region/time filters |
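The three-tier bypass reduces to a simple strategy chain: try the cheapest access method first and escalate on failure. The tier functions below are stand-ins, not real curl-cffi or Playwright calls:

```python
def fetch_with_fallback(url: str, strategies) -> str:
    """Try each access strategy in order, escalating on failure."""
    errors = []
    for name, strategy in strategies:
        try:
            return strategy(url)
        except Exception as exc:  # real code would catch narrower errors
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all strategies failed: " + "; ".join(errors))

# Illustrative stand-ins for the three tiers.
def plain_http(url):
    raise ConnectionError("403: bot detected")

def stealth_tls(url):
    return f"<html from {url} via impersonated TLS>"

def full_browser(url):
    return f"<html from {url} via headless browser>"

tiers = [("http", plain_http), ("stealth", stealth_tls), ("browser", full_browser)]
page = fetch_with_fallback("https://example.com", tiers)
```

Ordering the tiers from cheapest to most expensive keeps the common case fast while reserving full browser automation for the hardest sites.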

Commands:

```text
/create doc <title>|<content>   - Word document
/create pdf <title>|<content>   - PDF document
/create excel [[...]]           - Excel spreadsheet
/create slides [{...}]          - PowerPoint
/create image <path> info|thumb|resize|convert|compress|rotate
```
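The `<title>|<content>` argument convention above can be parsed in a few lines. This is a hypothetical helper, not Lya's actual parser:

```python
def parse_create(args: str) -> tuple[str, str, str]:
    """Split '/create <kind> <title>|<content>' arguments."""
    kind, _, rest = args.partition(" ")
    title, sep, content = rest.partition("|")
    if not sep:
        raise ValueError("expected '<title>|<content>'")
    return kind, title.strip(), content.strip()

kind, title, content = parse_create("doc Quarterly Report|Revenue grew 12%.")
```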

Development & Tooling

| Capability | Technology | Description |
| --- | --- | --- |
| Autonomous Coding | Plan → Code → Test → Commit | Full development cycle with iterative test-fix loops and auto-commit |
| Dynamic Tool Registry | PyPI / GitHub hot-install | Discovers, installs, and registers new skills at runtime without a restart |
| Capability Marketplace | ClawHub integration | Browse, search, install, and update capabilities from clawhub.com |
| Unrestricted System Access | Native OS integrations | Full filesystem, process, and network access with configurable security policies |
| Code Sandboxing | Ephemeral Docker containers | Secure, isolated execution for untrusted or generated code |
| Explainability Engine | Decision tracing | Tracks and explains agent decisions with reasoning chains |
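A runtime tool registry boils down to a name → callable map plus on-demand imports, so new tools become available without a restart. A minimal sketch with illustrative names:

```python
import importlib
from typing import Callable

class ToolRegistry:
    """Register callables at runtime and dispatch by name."""

    def __init__(self):
        self._tools: dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self._tools[name] = fn

    def register_from_module(self, name: str, module: str, attr: str) -> None:
        # Hot registration: the module is imported only when the tool is added.
        self.register(name, getattr(importlib.import_module(module), attr))

    def run(self, name: str, arg: str) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](arg)

registry = ToolRegistry()
registry.register("shout", lambda s: s.upper())
```

In the full system, `register_from_module` would be preceded by a `pip install` step and a security-policy check before the new code is ever imported.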

Connectivity & Deployment

| Capability | Technology | Description |
| --- | --- | --- |
| Multi-Channel Messaging | Discord · Telegram · Slack | A unified ChannelManager routes messages across platforms seamlessly |
| Live Dashboard | FastAPI + WebSocket | Real-time tool log streaming, session history, and an interactive chat UI |
| MCP Integration | Model Context Protocol | Native plugin system for GitHub, databases, and third-party tools |
| Visuomotor Automation | PyAutoGUI + MSS | Screen capture and desktop UI interaction for visual automation tasks |

Multi-Model LLM Routing

| Model Type | Purpose | Configuration |
| --- | --- | --- |
| Agent Model | General-purpose multimodal | `LYA_LLM_DEFAULT_MODEL` |
| Vision Model | Image analysis | `LYA_LLM_VISION_MODEL` |
| Coding Model | Code generation and review | `LYA_LLM_CODE_MODEL` |
| Reasoning Model | Complex reasoning and architecture | `LYA_LLM_REASONING_MODEL` |
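Routing by task type is essentially an env-var lookup with defaults. A sketch using the variables and default models documented in the configuration section; the `model_for` helper itself is hypothetical:

```python
import os

# Task type -> (env var, assumed default model).
_DEFAULTS = {
    "agent":     ("LYA_LLM_DEFAULT_MODEL",   "llama3"),
    "vision":    ("LYA_LLM_VISION_MODEL",    "llava"),
    "code":      ("LYA_LLM_CODE_MODEL",      "qwen2.5-coder"),
    "reasoning": ("LYA_LLM_REASONING_MODEL", "deepseek-v3"),
}

def model_for(task: str) -> str:
    """Pick the Ollama model for a task type, honoring env overrides."""
    env_var, default = _DEFAULTS[task]
    return os.environ.get(env_var, default)

model = model_for("code")  # the configured coding model, or the default
```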

Configuration

Lya is configured via environment variables (.env file). The installer walks you through this interactively, or you can edit .env directly:

| Variable | Default | Description |
| --- | --- | --- |
| `LYA_LLM_PROVIDER` | `ollama` | LLM backend (only `ollama` is supported) |
| `LYA_LLM_DEFAULT_MODEL` | `llama3` | Primary multimodal model |
| `LYA_LLM_VISION_MODEL` | `llava` | Image analysis model |
| `LYA_LLM_CODE_MODEL` | `qwen2.5-coder` | Code generation model |
| `LYA_LLM_REASONING_MODEL` | `deepseek-v3` | Complex reasoning model |
| `LYA_LLM_TOOL_CALLING_MODE` | `auto` | Tool calling: `auto`, `native`, `prompt` |
| `LYA_PERSONALITY_ENABLED` | `true` | Enable the personality and emotional state engine |
| `LYA_PERSONALITY_DEFAULT_TONE` | `friendly` | Default tone: `friendly`, `formal`, `playful`, `calm` |
| `LYA_TELEGRAM_BOT_TOKEN` | (none) | Telegram bot token for deployment |
| `LYA_WORKSPACE_DIR` | `~/.lya-workspace` | Directory for downloads, uploads, and projects |
| `LYA_VOICE_ENABLED` | `false` | Enable voice features (set up during installation) |
| `LYA_SANDBOX_ENABLED` | `true` | Enable isolated code execution |

See .env.example for the full configuration reference.
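For reference, the `.env` format can be parsed in a few lines. The real installer may well use a library such as python-dotenv; this is only a sketch of the file format:

```python
from pathlib import Path

def load_env(path: str) -> dict[str, str]:
    """Parse KEY=VALUE lines from a .env file, skipping blanks and comments."""
    values = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values
```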


Documentation

| Section | Description |
| --- | --- |
| Installation Guide | Detailed setup instructions and requirements |
| Configuration Reference | All environment variables and their effects |
| Architecture Overview | Clean Architecture layers and design decisions |
| Data Flow & CQRS | Command/query separation and event flow |
| Tool & Plugin System | MCP integration and custom tool development |
| Autonomous Governance | Self-healing, auto-updates, and evolution loops |
| Development Guide | Contributing, testing, and code style |

Requirements

Software

- Python 3.11 – 3.13
- Git (for installation)
- Ollama (local or remote access)
- Optional: Docker (for code sandboxing), ChromaDB/Qdrant (for persistent memory)

Hardware (Minimum)

| Component | Requirement |
| --- | --- |
| CPU | 4+ physical cores |
| RAM | 8 GB (16 GB+ recommended for local LLM) |
| GPU | NVIDIA/AMD with 4 GB+ VRAM (highly recommended) |
| Storage | 10 GB+ free space (SSD preferred) |
| OS | Linux (Ubuntu/Debian tested), macOS |

Contributing

Lya is designed to be extended. We welcome contributions of all kinds — new tools, channel integrations, memory backends, and agent capabilities.

  1. Fork the repository
  2. Create a feature branch (`git checkout -b feat/my-feature`)
  3. Commit your changes (`git commit -m 'feat: add my feature'`)
  4. Push and open a Pull Request

Please review the Development Guide for code style and testing conventions.


License

Licensed under the Apache License 2.0.

Copyright 2024–2026 Shojaeei. All rights reserved.
