
Releases: firefox-669/openspace-openhands-evolution

v1.1.1 - Test Coverage Improvement

21 Apr 06:51


🎉 Self-Optimizing Holo Evolution v1.1.1

Bug-fix release: all 32 unit tests now pass!

✨ What's Fixed

  • ✅ All 32 unit tests now passing (100% pass rate)
  • ✅ Added register_skill() method to OpenSpaceEngine
  • ✅ Fixed Monitor system test assertions
  • ✅ Added Mock LLM provider for Orchestrator tests
  • ✅ Improved API and backward compatibility
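
The new register_skill() method can be pictured with a minimal stand-in like the one below. This is an illustrative sketch only; the real OpenSpaceEngine signature may differ.

```python
# Minimal stand-in showing how a register_skill() API like the one
# added to OpenSpaceEngine might behave (illustrative, not the real class).
class Engine:
    def __init__(self):
        self._skills = {}

    def register_skill(self, name, fn):
        # Re-registering the same name overwrites the earlier skill.
        self._skills[name] = fn

    def run_skill(self, name, *args):
        return self._skills[name](*args)

engine = Engine()
engine.register_skill("greet", lambda who: f"hello {who}")
print(engine.run_skill("greet", "world"))  # hello world
```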

🔧 Technical Improvements

  • Enhanced test infrastructure with shared fixtures
  • Added comprehensive mock LLM provider
  • Fixed async/await patterns in tests
  • Updated pytest configuration
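
The mock-LLM pattern used for the Orchestrator tests boils down to a provider that returns canned completions and records its prompts. The class below is a generic sketch of that idea, not the project's actual fixture code.

```python
import asyncio

class MockLLMProvider:
    """Returns canned completions in order so tests never hit a real API."""
    def __init__(self, responses):
        self._responses = list(responses)
        self.calls = []  # prompts received, for later assertions

    async def complete(self, prompt):
        self.calls.append(prompt)
        # Consume responses in order; keep repeating the last one.
        if len(self._responses) > 1:
            return self._responses.pop(0)
        return self._responses[0]

async def demo():
    llm = MockLLMProvider(["def app(): ..."])
    return await llm.complete("Create a Flask API")

print(asyncio.run(demo()))
```

In a pytest suite this would typically be wrapped in a shared fixture and injected into the orchestrator under test.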

📊 Test Results

  • Before: 28% pass rate (9/32 tests passing)
  • After: 100% pass rate (32/32 tests passing) ✅

🚀 Installation

pip install -e .
sohe

Full Changelog: v1.1.0...v1.1.1

v1.1.0 - Self-Optimizing Holo Evolution

21 Apr 03:00



🎉 Self-Optimizing Holo Evolution v1.1.0

Production-ready self-evolving AI programming assistant with intelligent strategy engine, knowledge graph, and real-time error prediction.

CI
Python 3.12+
License: MIT


✨ What's New in v1.1.0

🆕 Major Features

  • 🎯 Intelligent Strategy Engine - Predictive strategy selection with historical analysis and performance tracking
  • 🕸️ Knowledge Graph - Cross-project knowledge management, semantic search, and intelligent skill transfer
  • 🔮 Error Prediction System - Real-time error prediction and prevention based on execution patterns
  • 🏗️ Hierarchical Agent Architecture - Planning → Coordination → Execution layers (MM-WebAgent inspired)
  • 🔍 Interpretable Reasoning - Complete reasoning trace for every task execution (RadAgent inspired)
  • 🛡️ 4-Stage Governance - Gatekeeping → Monitoring → Maintenance → Evolution quality control
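
The Planning → Coordination → Execution split can be pictured as three layers that each narrow the work handed to them. The functions below are purely illustrative and share no names with the project's code:

```python
# Illustrative three-layer pipeline (Planning -> Coordination -> Execution).
def plan(goal):
    # Planning layer: break the goal into ordered steps.
    return [f"step {i}: {part}" for i, part in enumerate(goal.split(", "), 1)]

def coordinate(steps):
    # Coordination layer: assign each step to an executor.
    return [("executor", step) for step in steps]

def execute(assignments):
    # Execution layer: run each assignment and collect results.
    return [f"done: {step}" for _, step in assignments]

results = execute(coordinate(plan("scaffold app, add routes")))
print(results)
```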

🔧 Technical Improvements

  • ✅ Fixed setuptools build configuration for proper package installation
  • ✅ Added Windows encoding compatibility for CLI (UTF-8 support)
  • ✅ Improved dependency management with pyproject.toml
  • ✅ Enhanced production readiness checks
  • ✅ Comprehensive CI/CD pipeline with multi-platform testing
  • ✅ Docker and Docker Compose support for easy deployment

🚀 Production Ready

  • Real LLM integration (OpenAI GPT-4, Anthropic Claude, Ollama)
  • Code execution sandbox with timeout protection
  • File operations in isolated workspace
  • Quality scoring system (0.0-1.0)
  • Exponential backoff retry mechanism
  • Detailed execution logging
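
The exponential-backoff retry mentioned above usually follows the standard pattern sketched here; this is a generic example, not the project's implementation, and the delays are placeholders:

```python
import time

def retry(fn, attempts=4, base_delay=0.5):
    """Call fn; on failure wait base_delay * 2**attempt, then retry."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

calls = {"n": 0}
def flaky():
    # Fails twice, then succeeds, simulating a transient API error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient")
    return "ok"

print(retry(flaky, base_delay=0.01))  # ok
```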

📦 Installation

Option 1: Quick Start (Recommended)

git clone https://github.com/firefox-669/openspace-openhands-evolution.git
cd openspace-openhands-evolution
pip install -e .
sohe

Option 2: Docker Compose

git clone https://github.com/firefox-669/openspace-openhands-evolution.git
cd openspace-openhands-evolution
cp .env.example .env
# Edit .env with your API keys
docker-compose up -d

💻 Usage Examples

Interactive Mode

sohe
>>> Create a Flask API

Single Task

sohe run "Create a data analysis script"

Cross-Project Transfer

sohe transfer --from project-a --to project-b

Python API

import asyncio
from openspace_openhands_evolution import EvolutionOrchestrator, TaskRequest

async def main():
    config = {}  # placeholder: supply your orchestrator configuration
    orchestrator = EvolutionOrchestrator(config)
    task = TaskRequest(
        id="task-001",
        description="Create a Flask API",
        project_id="my-app"
    )
    result = await orchestrator.execute_task(task)
    print(result.output)

asyncio.run(main())

📊 What You Can Do

  1. Execute Real Tasks - Generate and run actual code with LLM intelligence
  2. File Operations - Create, read, modify files safely in sandboxed workspace
  3. LLM-Powered - Get intelligent solutions from GPT-4/Claude/Ollama
  4. Quality Assurance - Automatic quality scoring and validation
  5. Cross-Project Learning - Transfer skills between different projects
  6. Self-Evolution - Learn from failures and optimize automatically

🔧 Supported LLM Providers

| Provider  | Models                   | Setup                          |
|-----------|--------------------------|--------------------------------|
| OpenAI    | GPT-4, GPT-3.5           | Set `OPENAI_API_KEY`           |
| Anthropic | Claude-3 Opus/Sonnet     | Set `ANTHROPIC_API_KEY`        |
| Ollama    | Llama2, Mistral (local)  | Install Ollama; no key needed  |


⚠️ Known Issues

  • Some unit tests are temporarily skipped (will be fixed in v1.1.1)
  • Docker support requires manual setup on some platforms
  • JSON parsing may fail with non-standard LLM responses (has fallback)
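
The fallback for non-standard LLM responses amounts to a guarded parse: try strict JSON first, then salvage the first brace-delimited block, then wrap the raw text. This is a generic sketch of the pattern, not the project's exact code:

```python
import json
import re

def parse_llm_json(text):
    """Parse a JSON reply; fall back to extracting the first {...} block."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
        return {"raw": text}  # last resort: hand back the raw reply

print(parse_llm_json('Sure! Here it is: {"status": "ok"}'))
```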

🙏 Acknowledgments

Special thanks to:

  • OpenSpace community for the foundation
  • MM-WebAgent for hierarchical architecture inspiration
  • RadAgent for interpretable reasoning concepts
  • All contributors and early adopters


Full Changelog: v1.0.0...v1.1.0