- Where's the name DeerFlow come from?
- Which models does DeerFlow support?
- How do I view complete model output?
- How do I enable debug logging?
- How do I troubleshoot issues?
## Where's the name DeerFlow come from?

DeerFlow is short for Deep Exploration and Efficient Research Flow. It is named after the deer, a symbol of gentleness and elegance. We hope DeerFlow brings you a gentle and elegant deep research experience.
## Which models does DeerFlow support?

Please refer to the Configuration Guide for more details.
## How do I view complete model output?

If you want to see the complete model output, including system prompts, tool calls, and LLM responses:

- Enable debug logging by setting `DEBUG=True` in your `.env` file.
- Enable LangChain verbose logging by adding these to your `.env`:

  ```
  LANGCHAIN_VERBOSE=true
  LANGCHAIN_DEBUG=true
  ```

- Use LangSmith tracing for visual debugging (recommended for production):

  ```
  LANGSMITH_TRACING=true
  LANGSMITH_API_KEY="your-api-key"
  LANGSMITH_PROJECT="your-project-name"
  ```

For detailed instructions, see the Debugging Guide.
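Taken together, the options above can all live in one `.env`; a sketch (the LangSmith key and project name are placeholders you would replace with your own):

```shell
# Sketch of a .env combining the debug options above.
# LANGSMITH_API_KEY and LANGSMITH_PROJECT values are placeholders.
DEBUG=True
LANGCHAIN_VERBOSE=true
LANGCHAIN_DEBUG=true
LANGSMITH_TRACING=true
LANGSMITH_API_KEY="your-api-key"
LANGSMITH_PROJECT="your-project-name"
```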
## How do I enable debug logging?

To enable debug logging:

- Open your `.env` file
- Set `DEBUG=True`
- Restart your application

For Docker Compose:

```bash
docker compose restart
```

For development:

```bash
uv run main.py
```

You'll now see detailed logs including:

- System prompts sent to LLMs
- Model responses
- Tool execution details
- Workflow state transitions

See the Debugging Guide for more options.
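Before restarting, it can help to confirm the flag actually made it into your `.env`. A minimal sketch (`debug_enabled` is a hypothetical helper, and the example file stands in for your real `.env`):

```shell
# Hypothetical helper: report whether DEBUG=True is set in a given env file
debug_enabled() {
  grep -q '^DEBUG=True' "$1" && echo "debug enabled" || echo "debug disabled"
}

# Example against a throwaway file standing in for your real .env
printf 'DEBUG=True\n' > /tmp/deerflow-example.env
debug_enabled /tmp/deerflow-example.env   # prints "debug enabled"
```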
## How do I troubleshoot issues?

When encountering issues:

- Check the logs: Enable debug logging as described above
- Review configuration: Ensure your `.env` and `conf.yaml` are correct
- Check existing issues: Search GitHub Issues for similar problems
- Enable verbose logging: Use `LANGCHAIN_VERBOSE=true` for detailed output
- Use LangSmith: For visual debugging, enable LangSmith tracing
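The "review configuration" step can start with a quick existence check; a sketch, assuming you run it from the repo root where DeerFlow reads `.env` and `conf.yaml` (`check_file` is a hypothetical helper, not part of DeerFlow):

```shell
# Hypothetical helper: report whether each expected config file is present
check_file() {
  [ -f "$1" ] && echo "$1: found" || echo "$1: MISSING"
}

check_file .env
check_file conf.yaml
```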
For Docker-specific issues:

```bash
# View logs
docker compose logs -f

# Check container status
docker compose ps

# Restart services
docker compose restart
```

For more detailed troubleshooting steps, see the Debugging Guide.