# Changelog

-All notable changes to this project will be documented in this file.
+All notable changes to this project are documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

-## [Unreleased]
+## [0.1.5] - 2026-02-15

-### Added
-- Planned:
-  - **Plan / Code modes** in interactive CLI (explicit "planning" vs "coding" flows for complex tasks).
-  - First-class support for **open-source models via third-party providers** (e.g. OpenRouter, Groq, and similar gateways), alongside existing Ollama + cloud integrations.
-
-### Fixed
-- TBC
-
----
-
-## [0.1.3] - 2025-01-XX
-
-### Fixed
-- **Critical Bug Fix**: Fixed duplicate code generation issue where natural language requests triggered code generation twice, causing unnecessary LLM API calls, high CPU usage, and duplicate code blocks in output
-- Removed duplicate code block in natural language processing path that was calling `_process_input()` twice for the same user input
-
----
-
-## [0.1.2] - 2025-01-XX
+Initial public release of **RLM Code**.

### Added
-- **MCP Integration**: Full Model Context Protocol (MCP) client support with commands for server management (`/mcp-servers`, `/mcp-connect`, `/mcp-disconnect`), tools (`/mcp-tools`, `/mcp-call`), resources (`/mcp-resources`, `/mcp-read`), and prompts (`/mcp-prompts`, `/mcp-prompt`)
-- **MCP Documentation**: Complete MCP guides and tutorials (overview, integration reference, filesystem assistant, GitHub triage)
-- **MCP Examples**: Working implementations for filesystem assistant and GitHub triage copilot, plus configuration examples
-- **MCP Configuration**: Support for stdio, SSE, and WebSocket transports with multiple directory access for filesystem server
-- **MCP Error Handling**: Auto-connect for `/mcp-tools`, detailed error messages with troubleshooting tips
+- Unified Textual TUI with tabs for **RLM**, **Files**, **Details**, **Shell**, and **Research**.
+- Recursive execution engine with multiple patterns: **pure RLM**, **harness/code-agent**, and direct LLM flows.
+- Research workflows: run tracking, trajectory capture, replay, benchmark presets, compare/report flows.
+- Sandbox runtime layer (**Superbox**) with profile-driven runtime selection and fallback orchestration.
+- Secure runtime options including Docker and Monty, plus pluggable runtime adapters.
+- LLM integrations for cloud and local model routes, including BYOK workflows and ACP connectivity.
+- Coding harness with optional MCP tool integration for local/BYOK development workflows.
+- Framework adapter surface for RLM-style integrations (including DSPy-native and ADK-oriented paths).
+- Observability integrations (MLflow, LangFuse, Logfire, LangSmith, OpenTelemetry) via sink architecture.
+- Documentation site (MkDocs Material) with onboarding, CLI, TUI, sandbox, integrations, and benchmark guides.

### Changed
-- **Default Performance Settings**: Fast mode now enabled by default, RAG disabled by default for faster initial responses
-- `/mcp-tools` command now auto-connects to servers if not already connected
-- Improved MCP error messages and session management
-- Welcome screen displays performance settings (RAG/Fast Mode status) with contextual tips
-- Code generation completion messages now include tips to enable RAG for better quality when disabled
-
-### Fixed
-- Fixed `/mcp-tools` command failing when server not connected
-- Improved error handling for MCP connection and configuration issues
-
----
-
-## [0.1.1] - 2025-11-27
+- Project identity standardized as **RLM Code** (legacy inherited naming removed from repository-facing surfaces).
+- Packaging and project metadata prepared for open-source release.
+- License updated to **Apache-2.0**.

-### Added
-- **UV Support**: Full support for `uv` as an alternative to `python -m venv` for creating virtual environments. Documentation updated to recommend `uv` as the primary method.
-- **Performance Toggles**: New `/fast-mode [on|off]`, `/disable-rag`, and `/enable-rag` commands for controlling RAG indexing and response speed. Performance settings now visible in welcome screen and `/status` command.
-- **Venv Detection**: Automatic detection of virtual environment in project root with startup warnings if missing.
-
-### Changed
-- Welcome screen now displays RAG Mode and Fast Mode status with context-aware tips.
-- Code execution prefers Python from project's `.venv/bin/python` when available.
-- Documentation updated to recommend `uv` as the primary installation method.
-
----
-
-## [0.1.0] - 2025-11-26
-
-### Added
-- **Interactive CLI**: Rich TUI with natural language interface for generating DSPy Signatures, Modules, and Programs. Core workflows: development (`/init` → generate → `/validate` → `/run`) and optimization (`/data` → `/optimize` → `/eval`).
-- **Model Support**: Local Ollama models and cloud providers (OpenAI, Anthropic, Gemini) with interactive `/model` command for easy connection. SDK support via optional extras: `rlm-code[openai]`, `rlm-code[anthropic]`, `rlm-code[gemini]`, `rlm-code[llm-all]`.
-- **Code Generation**: Natural language to DSPy code with support for major patterns (ChainOfThought, ReAct, RAG, etc.) and templates for common use cases.
-- **Validation & Execution**: `/validate` for code checks, `/run` and `/test` for sandboxed execution.
-- **GEPA Optimization**: End-to-end optimization workflows with `/optimize` commands and evaluation metrics integration.
-- **MCP Integration**: Built-in MCP client for connecting to external tools and data sources.
-- **Project Management**: `/init`, codebase indexing, RAG support, session management, and export/import functionality.
-- **Documentation**: Complete docs site (MkDocs Material) with getting started guides, tutorials, and reference documentation.
-
-### Changed
-- Default Ollama timeout increased to 120 seconds for large models.
-- Examples updated to use modern models (`gpt-5-nano`, `claude-sonnet-4.5`, `gemini-2.5-flash`).
-- Interactive UI improved with Rich library and `DSPY_CODE_SIMPLE_UI` mode for limited emoji support.
-- Natural language routing refined to prefer answers for questions and avoid duplicate code generation.
+### Security
+- Safer sandbox-first runtime guidance in docs and configuration defaults.
+- Unsafe local `exec` usage preserved only as an explicit, opt-in path for advanced development scenarios.

-### Fixed
-- OpenAI SDK migration to new client API, removed unsupported parameters for newer models.
-- Interactive mode errors (`name 'explanations' is not defined`, syntax errors).
-- Ollama timeout handling and error messages.
-- Documentation formatting and navigation issues.
+[0.1.5]: https://github.com/SuperagenticAI/rlm-code/releases/tag/v0.1.5