Releases: saigontechnology/AgentCrew

v0.12.15

21 Apr 15:46

v0.12.14

21 Apr 08:11

What's Changed

🐛 Bug Fixes

  • opencode: Fixed issue preventing Kimi model usage due to missing reason_content field
  • custom_llm: Improved tool parsing for custom LLM services
  • setup: Added openai_codex to the memory LLM configuration
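The Kimi fix above is the classic missing-optional-field problem: a provider omits a field entirely instead of sending null, and strict key access crashes. A minimal sketch of the defensive-parsing pattern, assuming an illustrative message shape (the field name `reason_content` comes from the note above; the function is hypothetical, not AgentCrew's actual code):

```python
def extract_reasoning(message: dict) -> str:
    # Treat reason_content as optional: some models (e.g. Kimi) omit the
    # field entirely rather than sending null, so .get() with a fallback
    # avoids a KeyError when it is absent.
    return message.get("reason_content") or ""

# A Kimi-style message without the field parses cleanly:
print(extract_reasoning({"content": "hello"}))                            # -> ""
print(extract_reasoning({"content": "hi", "reason_content": "step 1"}))   # -> "step 1"
```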

v0.12.13

21 Apr 04:29

🚀 Release Notes

✨ New Features

  • Provider: Introduced OpenCode Go as a new provider

🐛 Bug Fixes

  • LLM: Corrected parameters for Qwen models

🧹 Maintenance & Chores

  • LLM: Removed deprecated model
  • Docker: Pinned the a2a-sdk version for build stability

📝 Summary

This release adds OpenCode Go as a new provider option, fixes Qwen model parameter configuration, removes a deprecated LLM model, and pins the a2a-sdk Docker dependency for more predictable builds.

v0.12.12

20 Apr 16:22

✨ Features

  • LLM: Added Qwen 3.6 support (8a8ae0d)
  • Agent: Allowed agents in remoting mode to have memory capabilities (9ab9ab4)

🔧 Improvements

Voice

  • Made the voice system more robust (eb982e7)
  • Refined all voice-related code (25ce1a1)
  • Clear the audio buffer when voice playback does not complete (68720b0)
  • Corrected voice behavior (c1f8eb2)
  • Made voice output more natural (ca005ad)
  • Added more speak usage scenarios (b2b3e7b)
  • Made the agent's speech more verbose (304c2b3)
  • The speak tool is now available only when the user enables it via argument (63fca05)

Memory

  • Fixed a bug where memory search could miss entries within the to_date range (228362a)
  • Allowed passing session_id to maintain the same agent across A2A calls with clean memory (d9e3fab)
  • Decoupled the memory tool from the agent manager; the registrar is now the source of agents (9e3b6b7)
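The to_date fix is a typical boundary bug: filtering with a strict upper bound on a date-only `to_date` silently drops entries timestamped later that same day. A generic sketch of the inclusive fix (illustrative only, not AgentCrew's memory code):

```python
from datetime import datetime

def in_range(ts: datetime, from_date: datetime, to_date: datetime) -> bool:
    # Extend to_date to the end of that day so entries timestamped
    # later on the same day are not excluded by the upper bound.
    end = to_date.replace(hour=23, minute=59, second=59, microsecond=999999)
    return from_date <= ts <= end

entry = datetime(2025, 4, 20, 15, 30)
print(in_range(entry, datetime(2025, 4, 1), datetime(2025, 4, 20)))  # -> True
```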

General

  • Removed a deprecated function (29a1906)

v0.12.11

17 Apr 08:50

✨ Features

  • A2A Memory Support: Enabled user-controlled memory in A2A workflows when the memory-path parameter is explicitly set
  • Environment Configuration: Added support for using environment variables to configure settings
  • Voice Communication: Reworked voice functionality to enable agents to use it as a communication tool
  • LLM Models:
    • Added Claude Opus 4.7 model for Copilot integration
    • Added GoldenEye model for GitHub integration
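Environment-variable configuration usually layers env values over file-based defaults. A minimal sketch of that precedence pattern; the `AGENTCREW_*` variable names are hypothetical, not documented AgentCrew settings:

```python
import os

def load_setting(name: str, file_config: dict, default=None):
    # Precedence: environment variable > config file > default.
    env_key = f"AGENTCREW_{name.upper()}"  # hypothetical prefix
    if env_key in os.environ:
        return os.environ[env_key]
    return file_config.get(name, default)

os.environ["AGENTCREW_MODEL"] = "qwen-3.6"
print(load_setting("model", {"model": "gpt-4o"}))   # -> qwen-3.6 (env wins)
print(load_setting("timeout", {"timeout": "30"}))   # -> 30 (file fallback)
```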

🔧 Improvements

  • LLM Content: Corrected thinking content generation for improved response quality
  • System Prompt: Cleaned and optimized evolved system prompt for better efficiency and clarity

v0.12.10

15 Apr 14:57

Release Notes

✨ Features

  • Docker: Pre-download the model in the Docker image for faster inference
  • LLM: Add Gemma 4 and GLM 5.1 support in DeepInfra

🐛 Bug Fixes

  • GUI: Prevent duplicate plan display in the GUI
  • Fork: Fixed a bug where the fork title was identical to the main conversation's

🔧 Chores & Improvements

  • GUI: Grouped state for easier maintenance
  • Conversation: Ensure the conversation saves successfully if the stream is interrupted

v0.12.9

15 Apr 04:47

🎉 Release Changelog

✨ Features

  • grep: Allow grepping across multiple paths
  • code_analysis: The agent can now focus on a specific scope when using read_repo

🐛 Bug Fixes

  • memory: Fixed an issue that could cause a crash on macOS

🔧 Chore

  • general: Fixed a small issue with the last provider when using Codex

v0.12.8

14 Apr 08:34

Release Notes

✨ Features

  • Memory: Allow memory to collect all assistant messages produced during a task, not just the last message
  • UI: Show the agent's evaluation as the plan, for transparency

🔧 Chores & Maintenance

  • Transfer: The only transferable post-action is now defer
  • Test: Fixed a failing test
  • Browser: Element UUIDs no longer need to be re-computed after an action
  • Evolve: Refactored the evolution code
  • Evolve: Corrected the evolution agent's behavior
  • Log: Cleaned up logging

v0.12.7

13 Apr 07:59

What's Changed

🐛 Bug Fixes

  • Stream Module: Increased first chunk timeout and fixed JSON dump serialization
  • OpenAI Integration: Ensured refresh token timing is synchronized with Codex CLI

📦 Commits

  • 6c16452 - fix(stream): increase first chunk timeout and fix json dump
  • 470442a - fix(openai): make sure the refresh time matched with codex cli

Full Changelog: View on GitHub

v0.12.6

13 Apr 06:06

Release Notes

🐛 Bug Fixes

  • Agent Transfer: Fixed issue where transfer operations still contained planning rules that should have been removed

✨ Features

  • Stream Control: Users can now stop streams immediately without waiting for timeout to occur
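Immediate stream stopping is typically done by checking a cancellation flag between chunks instead of relying on a read timeout to expire. A minimal sketch of that pattern using a `threading.Event` (illustrative, not AgentCrew's stream code):

```python
import threading

def stream_chunks(chunks, stop_event):
    # Check the stop event between chunks so a user-initiated stop
    # takes effect immediately instead of waiting out a read timeout.
    received = []
    for chunk in chunks:
        if stop_event.is_set():
            break
        received.append(chunk)
    return received

stop = threading.Event()
stop.set()  # user pressed stop before any chunk arrived
print(stream_chunks(["a", "b"], stop))  # -> []
```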

🔧 Chores & Improvements

  • Console: Enhanced console input handling to support multiline input for custom answers and tool denial responses
  • OpenAI Integration: Added support for "xhigh" reasoning effort level
  • LLM Defaults: Ensured default reasoning is consistently applied across all LLM operations
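Adding an effort level like "xhigh" usually means extending a validated set of accepted values before building the request. A sketch of that pattern; the level list and request shape are assumptions for illustration, not AgentCrew's or OpenAI's actual API:

```python
# Accepted reasoning-effort levels, now including "xhigh" per the note above.
EFFORT_LEVELS = ("low", "medium", "high", "xhigh")

def build_request(prompt: str, effort: str = "medium") -> dict:
    # Reject unknown levels early so a typo fails loudly at call time
    # rather than surfacing as a provider-side error mid-stream.
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return {
        "input": prompt,
        "reasoning": {"effort": effort},
    }

req = build_request("Summarize the changelog", effort="xhigh")
print(req["reasoning"])  # -> {'effort': 'xhigh'}
```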