# sentry-agents

Sentry Gen AI instrumentation for AI/LLM agents in Ruby applications.

Provides Sentry's AI Agents monitoring capabilities for Ruby, supporting multiple LLM providers (Anthropic, OpenAI, Cohere, Google Gemini, and others).
## Installation

Add this line to your application's Gemfile:

```ruby
gem 'sentry-agents'
```

And then execute:

```bash
bundle install
```

Or install it yourself as:

```bash
gem install sentry-agents
```

## Requirements

- Ruby >= 3.1.0
- sentry-ruby >= 5.0.0
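Spans are recorded only when sentry-ruby is initialized with tracing enabled (see Graceful Degradation below). A minimal initializer might look like the following sketch; the DSN is a placeholder, and `traces_sample_rate` is standard sentry-ruby configuration rather than part of this gem:

```ruby
require "sentry-ruby"

Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"] # your project DSN
  # Sample all transactions so agent, chat, tool, and handoff spans are kept.
  config.traces_sample_rate = 1.0
end
```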
## Configuration

```ruby
Sentry::Agents.configure do |config|
  # Default LLM provider (default: "anthropic")
  config.default_system = "anthropic"

  # Maximum string length for serialized data (default: 1000)
  config.max_string_length = 1000

  # Enable debug logging (default: false)
  config.debug = false

  # Custom data filtering (optional)
  config.data_filter = ->(data) do
    # Remove sensitive keys in production
    data.delete("gen_ai.request.messages") if ENV["SENTRY_SKIP_MESSAGES"]
    data
  end
end
```
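The filter receives the hash of serialized span data and must return the (possibly modified) hash. A stricter variant might redact message content rather than dropping the key entirely; a sketch built on the same hash-in/hash-out contract shown above, with an arbitrary 200-character cap:

```ruby
Sentry::Agents.configure do |config|
  config.data_filter = ->(data) do
    # Redact recorded messages instead of deleting the key.
    data["gen_ai.request.messages"] = "[REDACTED]" if data.key?("gen_ai.request.messages")

    # Cap any remaining string values (returns the modified hash).
    data.transform_values { |v| v.is_a?(String) ? v[0, 200] : v }
  end
end
```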
## Usage

Include the Sentry::Agents::Instrumentation module in any class:

```ruby
class MyAgent
  include Sentry::Agents::Instrumentation

  def process_request(user_message)
    with_agent_span(agent_name: "MyAgent", model: "claude-3-5-sonnet") do
      # Get LLM response
      response = with_chat_span(model: "claude-3-5-sonnet") do
        client.messages.create(
          model: "claude-3-5-sonnet-20241022",
          messages: [{ role: "user", content: user_message }]
        )
      end

      # Execute tool if needed
      if response.stop_reason == "tool_use"
        with_tool_span(
          tool_name: "search",
          tool_input: { query: response.tool_input["query"] }
        ) do
          search_service.search(response.tool_input["query"])
        end
      end

      # Track stage transition
      with_handoff_span(from_stage: "processing", to_stage: "complete") do
        update_status!(:complete)
      end

      response
    end
  end
end
```
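For illustration, calling the agent then produces one trace in which the chat, tool, and handoff spans should appear nested under the `MyAgent` agent span (`client`, `search_service`, and `update_status!` are assumed collaborators, as above):

```ruby
agent = MyAgent.new
response = agent.process_request("What's the weather in NYC?")
```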
### Per-span provider override

Override the default provider on a per-span basis:

```ruby
class OpenAIAgent
  include Sentry::Agents::Instrumentation

  def process(message)
    with_chat_span(model: "gpt-4", system: "openai") do
      openai_client.chat(model: "gpt-4", messages: [message])
    end
  end
end
```

## Instrumentation Methods

### with_agent_span

Wraps the overall agent execution lifecycle.
```ruby
with_agent_span(agent_name: "Emily", model: "claude-3-5-sonnet") do
  # Full agent conversation logic
end
```

### with_chat_span

Wraps individual LLM API calls. Automatically captures:
- Token usage (input/output tokens)
- Response text
```ruby
with_chat_span(model: "claude-3-5-sonnet", messages: conversation_history) do
  llm_client.chat(messages)
end
```

### with_tool_span

Wraps tool/function executions. Captures:
- Tool name
- Tool input
- Tool output
```ruby
with_tool_span(tool_name: "weather_lookup", tool_input: { city: "NYC" }) do
  weather_api.get_forecast("NYC")
end
```

### with_handoff_span

Tracks stage transitions or agent handoffs.
```ruby
with_handoff_span(from_stage: "greeting", to_stage: "qualification") do
  update_conversation_stage!
end
```

## Graceful Degradation

All instrumentation methods gracefully degrade when Sentry is not available or tracing is disabled. Your code will continue to work normally without any errors.
```ruby
# Works fine even without Sentry initialized
with_chat_span(model: "claude-3-5-sonnet") do
  llm_client.chat(messages) # Still executes, just without tracing
end
```

## Development

After checking out the repo, run:
```bash
bundle install
rake test    # Run tests
rake rubocop # Run linter
rake         # Run both
```
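The bare `rake` runs both tasks via the default task, which in the Rakefile presumably amounts to something like this sketch (not the repository's actual file):

```ruby
# Rakefile (sketch)
require "rake/testtask"
require "rubocop/rake_task"

Rake::TestTask.new(:test) do |t|
  t.libs << "test"
  t.test_files = FileList["test/**/*_test.rb"]
end

RuboCop::RakeTask.new(:rubocop)

task default: %i[test rubocop]
```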
## Releasing

Releases are automated via GitHub Actions. To publish a new version:

1. Update the version in `lib/sentry/agents/version.rb`
2. Update `CHANGELOG.md` with the new version's changes
3. Commit the changes:

   ```bash
   git add -A && git commit -m "Bump version to X.Y.Z"
   ```

4. Create and push a version tag:

   ```bash
   git tag vX.Y.Z
   git push origin main --tags
   ```
The release workflow will automatically:
- Run the test suite
- Build the gem
- Publish to RubyGems
- Create a GitHub Release with auto-generated release notes
## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/ihoka/sentry-agents/.
## Sponsors

This project is sponsored by:

## License

The gem is available as open source under the terms of the MIT License.