Roadmap #2
FSilveiraa announced in Announcements
Open
Plugin config from CLI args - Currently it's only possible to configure plugins from a file configuration.
I'd like to extend this to CLI args; it seems easy to add without breaking anything, and it's just an expected feature. I also need to find a way for plugins to document their configuration options.
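One shape this could take is folding dotted `PLUGIN.KEY=VALUE` pairs from the command line into the same nested dict that file-based plugin config produces. A minimal sketch; the `--plugin-opt` flag name and the dict layout are assumptions, not Solveig's actual CLI:

```python
# Sketch: fold repeated --plugin-opt PLUGIN.KEY=VALUE args into a nested
# config dict. Flag name and layout are illustrative assumptions.
import argparse


def parse_plugin_opts(pairs: list[str]) -> dict:
    """Turn ['shellcheck.severity=error'] into {'shellcheck': {'severity': 'error'}}."""
    config: dict = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        node = config
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value
    return config


parser = argparse.ArgumentParser()
parser.add_argument("--plugin-opt", action="append", default=[],
                    metavar="PLUGIN.KEY=VALUE")
args = parser.parse_args(["--plugin-opt", "shellcheck.severity=error"])
print(parse_plugin_opts(args.plugin_opt))
```

Because the result has the same shape as the file config, the two sources could be merged with CLI taking precedence.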
Web interface - I've started the work on a web interface for solveig and I'm convinced it offers some real value - rendering generated HTML and images, allowing deeper visual customization, better visual structuring with collapsible directory trees, etc. However, this is not expected to be available anytime soon.
Commit-on-change Plugin - Add a quick hook plugin to make a git commit upon every destructive filesystem operation (write, delete, etc), allowing the user to revert if necessary.
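The hook itself could be quite small: snapshot the working tree after any destructive tool runs. This assumes a hook signature like `on_after_tool(tool_name, path)`; the real plugin API may differ.

```python
# Sketch of a commit-on-change hook: after any destructive filesystem tool,
# stage everything and commit, so the user can revert with git. The hook
# signature is an assumption about the plugin API.
import subprocess

DESTRUCTIVE = {"write", "delete", "move", "copy"}


def commit_on_change(tool_name: str, path: str, repo: str = ".") -> None:
    if tool_name not in DESTRUCTIVE:
        return
    subprocess.run(["git", "-C", repo, "add", "--all"], check=True)
    # --allow-empty keeps one commit per operation even if content is unchanged
    subprocess.run(
        ["git", "-C", repo, "commit", "--allow-empty", "-m",
         f"solveig: {tool_name} {path}"],
        check=True,
    )
```

A real version would probably need to skip the commit when the target isn't inside a git repo.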
Show git branch in stats - add a timer that fetches the current branch every second.
Review Tree plugin - make it a built-in tool, add filter lists (so it avoids listing things like `node_modules/`)
Summarize past context - when we hit the context limit, messages are simply dropped, and any tool result is kept in context until it's cut out of the sliding context window. I should consider how both of these could be fixed.
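The filter-list part of the tree tool is straightforward: prune ignored directories during the walk so they're never descended into at all. The ignore set here is illustrative:

```python
# Sketch of a filtered tree listing: prune ignored directories in-place so
# os.walk never recurses into them. The ignore set is an example.
import os

IGNORED_DIRS = {"node_modules", ".git", "__pycache__", ".venv"}


def list_tree(root: str) -> list[str]:
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        # mutating dirnames in place stops os.walk from recursing into them
        dirnames[:] = sorted(d for d in dirnames if d not in IGNORED_DIRS)
        for name in sorted(filenames):
            paths.append(os.path.relpath(os.path.join(dirpath, name), root))
    return paths
```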
Edit message queue
Ongoing
Closed
Better UX async flow with cleaner chat flow - ✅ Added 16/04/2026 - Solution was to only display sections when they differ, integrated with the schema so it works both with live sessions and resumes, and provides a cleaner UX. The original plan: right now I expect to get back one big pydantic object with a comment, a task list or a tool list, all optional, while showing an Assistant section separator between each. For starters, I could take more advantage of the autonomous mode, remove the section separators until the user sends a message, and perhaps enforce that the model responds in streaming, in a loop of sending either a comment, a task list or a tool and receiving back results, until it reaches a point where no more tools are sent and the stream ends, at which point it awaits input.
Unified box design, all boxes are collapsible and have expand + copy buttons on header - ✅ Added 13/04/2026
Add button to copy box content to clipboard - ✅ Added 07/04/2026
MCP support - ✅ Added 07/04/2026
Overall optimization - ✅ Completed 04/04/2026 - The initial focus on Solveig's design was on security, not efficiency. This is mostly forgivable as network and assistant overhead impact runtime several orders of magnitude more than, for example, init'ing the token encoder once per message. Still, I'd like to clean up any unnecessary inefficiencies.
Cancel sent message - ✅ Added 04/03/2026 - Add an easy way to cancel a pending request, maybe Esc / Ctrl+C.
Show queued messages - ✅ Added 04/03/2026 - Display how many messages have been queued to send. Still missing: allowing queued messages to be edited (easy with current Textual elements).
Hook to convert HTML to MD - ✅ Added 01/03/2026 - add a hook plugin that executes after the HTTP tool and optionally converts the fetched HTML into Markdown, saving a ton of tokens and allowing better AI parsing - since the whole idea of the HTTP tool is mostly "do a request and pass the output to an AI model" (although it can also download files)
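To show why the conversion saves tokens, here's a deliberately minimal stdlib-only sketch that keeps headings and paragraph breaks and drops all markup; a real hook would likely use a dedicated HTML-to-Markdown converter:

```python
# Minimal sketch of an HTML -> Markdown pass: keep heading levels and
# paragraph breaks, drop all tags. A real plugin would use a proper converter.
from html.parser import HTMLParser


class HtmlToMd(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "p":
            self.out.append("\n")

    def handle_data(self, data):
        self.out.append(data)

    def text(self) -> str:
        return "".join(self.out).strip()


def html_to_md(html: str) -> str:
    parser = HtmlToMd()
    parser.feed(html)
    return parser.text()
```

Even this naive version strips attribute soup, scripts' tags and boilerplate markup, which is where most of the token bloat in raw HTML lives.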
HTTP request tool - ✅ Added 01/03/2026
Sub-command for user operations - ✅ Added 27/02/2026 - Extend subcommands to allow the user to optionally run them at will. They can read a file, write a dir, run a command, without asking the LLM.
Session management - ✅ Added 21/02/2026 - allow storing conversations, resuming them, etc. Done; resumed sessions are an almost perfect match to live ones.
Session awareness - ✅ Added 21/02/2026 - I'd like to have some sort of persistence. I think this should involve some kind of CLAUDE.md approach, although I would also like to consider some sort of progress tracking. I don't want to assume git is always available for reading.
Add line count to metadata - ✅ Added 19/02/2026
Up arrow to navigate previous messages - ✅ Added 19/02/2026
Living config and model reloading - ✅ Added 18/02/2026 - Allowing the user to change the config, starting with the model. Allows both `/config model set <model name>` and `/model set <model name>`.
Pre-check API - ✅ Added 19/01/2026 - Now does an API request for the selected model (or none, if using local), gets back details and fills in stats according to the API response (max context length, pricing, actual model name). The application exits if the model isn't found.
Markdown responses - ✅ Added 18/01/2026 - Actually added ages ago, but fully confirmed working with real model testing.
Edit tool - ✅ Added 18/01/2026 - Previously, if we wanted to write 2 lines in a 500-line file, we expected the LLM to return 498 unchanged lines along with the 2-line edit. Now there's an edit tool that just replaces `old_str` with `new_str`. This is the strategy that Claude Code, Gemini-CLI and Qwen-Code use (although Qwen-Code has some very smart fallback strategies that I just don't expect to replicate).
Read lines from file - ✅ Added 18/01/2026 - We can now read up to 3 ranges from a file and display them in separate blocks. Also added range validation, a better header display to indicate what we're reading, and a better tool description for the LLM.
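The core of the `old_str`/`new_str` strategy fits in a few lines: refuse missing or ambiguous matches instead of guessing, so the LLM is forced to include enough surrounding context. A sketch of the simplest path (without the fancier fallback strategies mentioned above):

```python
# Sketch of the old_str/new_str edit strategy: the match must exist and be
# unique, otherwise the edit is rejected and the model must add context.
def apply_edit(content: str, old_str: str, new_str: str) -> str:
    count = content.count(old_str)
    if count == 0:
        raise ValueError("old_str not found in file")
    if count > 1:
        raise ValueError(f"old_str is ambiguous ({count} matches); add context")
    return content.replace(old_str, new_str, 1)
```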
Refactor "requirement" into "tool" - ✅ Added 01/01/2026 - Make the naming more aligned with convention.
Add reasoning display - ✅ Added 29/11/2025 - Born out of necessity: Gemini really needs the reasoning to be stored when sending messages back and forth over an OpenAI API. That led to displaying a reasoning block in a collapsible body.
Plugin reloading - ✅ Added 25/11/2025 - Simplified, unified and overall better-ified the entire plugin system. Solveig can now both load and reload existing plugins from the filesystem on demand, making it able to fully reset its plugin system. This makes testing safer, concentrates low-level code in a well-documented shared method, and allows extending the plugin system itself, with new kinds of plugins now being discoverable and reloadable.
Count tokens from Instructor client - ✅ Added 25/11/2025 - Now uses a hybrid approach for token counting. Encoder counting is mandatory for pruning according to context window before sending to assistant. API messages are used to update real totals when received. This ensures displayed info is always real, while still allowing for pruning and context management.
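The hybrid approach above can be sketched as a small ledger: a local encoder gives an estimate good enough for pruning before a request, and the API's usage fields overwrite the running totals when the response arrives. The whitespace-split encoder below is a stand-in for a real tokenizer:

```python
# Sketch of hybrid token accounting: local estimates drive context pruning,
# API-reported usage drives the displayed totals. The default encoder is a
# stand-in for a real tokenizer.
class TokenLedger:
    def __init__(self, encode=lambda text: text.split()):
        self.encode = encode          # local estimator, only used for pruning
        self.total_prompt = 0         # real totals, from API usage fields
        self.total_completion = 0

    def estimate(self, messages: list[str]) -> int:
        return sum(len(self.encode(m)) for m in messages)

    def prune(self, messages: list[str], limit: int) -> list[str]:
        # drop oldest messages until the estimate fits the context window
        while len(messages) > 1 and self.estimate(messages) > limit:
            messages = messages[1:]
        return messages

    def record_usage(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.total_prompt += prompt_tokens
        self.total_completion += completion_tokens
```

The split of duties is the point: estimates can be slightly wrong without affecting what the user sees, because displayed totals only ever come from the API.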
Shellcheck allows executing anyway - ✅ Added 24/11/2025 - First example of a plugin config. Shellcheck now displays errors with formatting according to severity (err vs everything else), plus allows executing anyway (with Cancel option).
Fully agentic conversation loop - ✅ Added 24/11/2025 - I improved the conversation loop to allow full agency until the LLM reaches a point where it doesn't request operations. This can be configured in the config. Also added a default Cancel option that stops all processing and awaits a new user instruction (even in autonomous mode), and refined Ctrl+C to clear text if there is any in the input bar, and otherwise quit the app. This makes solveig a lot more autonomous and provides a much more expected user experience, both in line with similar tools and also giving users a lot more options. More importantly, it makes solveig both a lot safer ("just stop everything") and more usable (clear a huge text block with Ctrl+C, optionally let the LLM "keep going until you're done while I do the dishes").
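The shape of that loop, assuming a client that returns a reply with an optional list of tool requests (names here are illustrative, not Solveig's actual API):

```python
# Sketch of the agentic turn: keep exchanging tool results with the model
# until it stops requesting tools, then hand control back to the user.
def run_turn(client, history: list, user_message: str, autonomous: bool = True):
    history.append({"role": "user", "content": user_message})
    while True:
        reply = client.send(history)            # assistant comment + tool requests
        history.append({"role": "assistant", "content": reply.comment})
        if not reply.tools or not autonomous:
            return reply                        # no tools left: await user input
        for tool in reply.tools:
            result = tool.execute()             # the user can still pick Cancel here
            history.append({"role": "tool", "content": result})
```

The Cancel option described above would break out of this loop at the `tool.execute()` step, discarding remaining requests.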
Better textual integration - ✅ Added 22/11/2025 - I added several UI and UX improvements through Textual features. We now have collapsible directory trees, a collapsible stats bar, and better formatting overall.
Command persistence - ✅ Added 22/11/2025 - Things like changing cwd and setting environment variables are now persisted across commands. If I run 2 command requirements, one for `cd ~` and another for `pwd`, it will now show the expected result. This maintains user expectation across long-running sessions.
Allow system prompt to be user-configurable - ✅ Added 07/11/2025 - Easy to add, helps with user adoption, allows for specific instructions for specific models; this should just exist. But the large majority of my existing system prompt (apart from examples) isn't really negotiable - the available tools need some short description and it's not reasonable to expect users to handle that - so perhaps this should be more about system prompt additions than replacements.
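The command-persistence item above can be approximated without keeping a shell process alive: run each command in a fresh shell, but append a marker that dumps the final cwd and restore it for the next call. A sketch assuming bash is available, simplified to cwd only (environment variables could be persisted the same way via `env`):

```python
# Sketch of cwd persistence across separate command requirements: each
# command's shell reports its final directory, which seeds the next run.
import subprocess


class PersistentShell:
    def __init__(self, cwd: str = "."):
        self.cwd = cwd

    def run(self, command: str) -> str:
        marker = "__SOLVEIG_CWD__"
        proc = subprocess.run(
            ["bash", "-c", f"{command}; echo {marker}$PWD"],
            cwd=self.cwd, capture_output=True, text=True,
        )
        output, _, tail = proc.stdout.rpartition(marker)
        self.cwd = tail.strip() or self.cwd  # remember where the command ended up
        return output.rstrip("\n")
```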
Command configuration - ✅ Added 18/10/2025 - Add a `timeout` parameter to CommandRequirement with dual functionality clarified in the description: timeout >= 0 captures output, timeout < 0 detaches the shell (useful for things like opening a GUI). This also allows the LLM to use a larger-than-default timeout for things like a pytest run.
Clarify metadata display - ✅ Added 16/10/2025 - metadata is still not being displayed with all the information or using the new theming; pretty easy to improve on.
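The dual timeout semantics from the Command configuration item above can be sketched with the stdlib; this is a simplification of whatever CommandRequirement actually does:

```python
# Sketch of dual timeout semantics: timeout >= 0 waits and captures output
# (raising TimeoutExpired on expiry), timeout < 0 detaches the process so
# things like GUI launchers don't block the session.
import subprocess


def run_command(command: str, timeout: float):
    if timeout < 0:
        # fire-and-forget: don't wait, don't capture
        subprocess.Popen(command, shell=True,
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return None
    proc = subprocess.run(command, shell=True, capture_output=True,
                          text=True, timeout=timeout)
    return proc.stdout
```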
Chat logging - ✅ Added 16/10/2025 - store session logs, controlled through the `auto-log` flag to automatically store them and `logs-location` for where to save them. Add this as another sub-command: `/log ./path/to.log`.
Sub-commands - ✅ Added 16/10/2025 - Clarify the `/exit` sub-command somewhere besides documentation - maybe the input bar should show `/help` in the hints, and sending that one just displays the rest in the interface.
Diff view - ✅ Added 15/10/2025 - Operations that alter files (write, move, copy, but not delete) now show a diff view of the source and destination. Currently there is no actual "edit" support where the assistant asks to edit only some lines; the LLM can only ask to write entire files.
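The diff view above can be produced entirely with the stdlib: compare old and new file contents and render a unified diff before asking for approval.

```python
# Sketch: render a unified diff of a pending file change for user review.
import difflib


def render_diff(old: str, new: str, path: str) -> str:
    lines = difflib.unified_diff(
        old.splitlines(keepends=True), new.splitlines(keepends=True),
        fromfile=f"a/{path}", tofile=f"b/{path}",
    )
    return "".join(lines)
```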
Code linting - ✅ Added 15/10/2025 - I added code linting with theme support.
Interface re-write - ✅ Added 14/10/2025 - I moved from a basic loop where all interactions were dictated by the conversation to a better model that looks and behaves like a classic chat window. The project is now fully async and has much better flow control, with proper startup and shutdown using asyncio.Event; no compromises were made with sentinel values or hacks (besides some extremely necessary and well-documented ones).
Proper theme support - ✅ Added 25/09/2025