- Install in development mode: `pip install -e .`
- Run CLI: `df-agent` (after installation)
- Run tests: `pytest` (single test: `pytest tests/unit/tools/test_file.py::test_function`)
- No linting tools configured - follow PEP 8 style guidelines
- Imports: Group imports: standard lib, third-party, local modules
- Formatting: 4-space indentation, max line length 88 chars
- Types: Use type hints for all function parameters and returns
- Naming: snake_case for functions/variables, PascalCase for classes
- Error handling: Return dicts with 'status' and 'error_message' keys
- Docstrings: Use Google-style docstrings with Args/Returns sections
- Agent structure: Follow existing pattern with model, name, description, instruction, tools
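A minimal sketch combining these conventions (the function name and behavior are illustrative, not an actual project tool):

```python
import os

def read_case_file(file_path: str) -> dict:
    """Read a simulation case file.

    Args:
        file_path: Absolute path to the file to read.

    Returns:
        Dict with a 'status' key plus 'output' on success or
        'error_message' on failure.
    """
    # Guard clause instead of try/except, per the style rules above.
    if not os.path.isfile(file_path):
        return {"status": "error", "error_message": f"File not found: {file_path}"}
    with open(file_path, encoding="utf-8") as f:
        return {"status": "success", "output": f.read()}
```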
- Modular Architecture: Organized into core/, config/, types/ packages for better separation of concerns
- Agent System: Located in `src/agents/` with unified agent design and specialized CFD tools
- CLI Interface: Dedicated `src/cli/` package with command handling and session management
- Core Components: `src/core/` contains message processing, providers, sessions, streaming, and tools
- Configuration: `src/config/` handles configuration loading and management
- Type System: `src/types/` provides common type definitions and interfaces
- Dynamic Discovery: AGENT_REGISTRY system enables automatic agent registration and loading
- Tools: Functions with typed parameters returning status dictionaries for robust error handling
- Model Provider: Uses LiteLlm with deepseek/deepseek-chat for agent intelligence
- Dynamic Agent Discovery: AGENT_REGISTRY dictionary enables centralized agent management
- Automatic Registration: Agents register themselves on import for seamless discovery
- Circular Import Resolution: Registry system eliminates circular dependencies between CLI and agent modules
- Extensible Architecture: Easy addition of new agents following established patterns
- Integration Utilities: Helper functions in agent_integration.py for registry-based agent loading
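The registry pattern can be sketched roughly as follows (the decorator and factory names are assumptions; the real agent_integration.py may differ):

```python
# Central registry mapping agent names to factories (sketch only).
AGENT_REGISTRY: dict = {}

def register_agent(name: str):
    """Record an agent factory under `name` when its module is imported."""
    def wrap(factory):
        AGENT_REGISTRY[name] = factory
        return factory
    return wrap

@register_agent("deepflame_agent")
def make_deepflame_agent() -> dict:
    # Placeholder definition following the model/name/description/tools pattern.
    return {"name": "deepflame_agent", "description": "CFD assistant", "tools": []}

# The CLI can now load agents by name without importing agent modules directly,
# which is how the circular-import problem is avoided.
agent = AGENT_REGISTRY["deepflame_agent"]()
```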
- Keep functions simple unless composable/reusable
- Avoid unnecessary destructuring, else statements, try/except blocks
- Prefer single-word variable names where possible
- Avoid the `Any` type; use specific types
- Agent instructions in separate prompt.py files
- Tool functions return status dicts, log execution with print statements
- IMPORTANT: Never use default parameter values in tool function signatures - Google AI schema validation prohibits defaults
- CRITICAL: Never modify config.yaml without explicit user query - configuration changes must be user-initiated
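The no-defaults rule can be checked mechanically; a sketch (the tool name here is hypothetical):

```python
import inspect

# Allowed: every parameter explicit, no default values.
def read_lines(filePath: str, offset: int, limit: int) -> dict:
    """Hypothetical tool signature that passes Google AI schema validation."""
    return {"status": "success", "output": ""}

# Quick self-check that no parameter carries a default:
sig = inspect.signature(read_lines)
assert all(p.default is inspect.Parameter.empty for p in sig.parameters.values())
```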
- StreamingEvent: Inherits from Google ADK Event class for better content handling and Event compatibility
- Enhanced with rich metadata fields: event_category, performance_metrics, error_context, event_chain, tool_metadata
- Supports event relationships and performance tracking for comprehensive agent execution monitoring
- Maintains backward compatibility with dict-like access while leveraging Event class capabilities
- ALWAYS call todoread at the start of conversations to check existing progress and understand current task state
- For complex tasks (3+ steps), IMMEDIATELY create structured todos using todowrite with specific, actionable items
- Call todoread frequently throughout conversations to track progress, prioritize next steps, and maintain context
- Update todo status after completing tasks or encountering issues (mark as 'completed', 'in_progress', or 'cancelled')
- Break down user requests into specific, actionable todo items with unique IDs for systematic tracking
- Use todowrite to modify existing todos when plans change or new requirements emerge
- Coordinate multi-step workflows by maintaining todo state across conversation turns
- DeepFlame Simulation Setup Protocol: during initial setup, ALWAYS set the "stopAt" key in "system/controlDict" to "writeNow" so setup correctness can be verified before running full production simulations
- Ydefault File Configuration: Ydefault file in 0/ directory provides default mass fractions for chemical species without explicit boundary condition files - essential for combustion cases with multiple species
- Absolute Path Requirements: ALWAYS use absolute paths for chemical mechanism file references in CanteraTorchProperties - relative paths will cause simulation failures
- Tool Selection Criteria: Use edit tool for modifications with fuzzy matching/multi-line changes; use write tool for simple file creation when you have complete content
- Prefer inline Python execution over creating script files for chemical property calculations using Cantera
- Use bash tool with 'python -c "code"' for simple calculations to avoid filesystem clutter
- Critical: When using Cantera Solution() in inline code, ALWAYS use absolute paths for mechanism files
- Only create Python script files when calculations are complex or require multiple statements
- Security validation applies to both inline code and script files - must import Cantera and perform chemical calculations
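Putting these rules together, an inline invocation might be assembled like this (the mechanism path is a placeholder; the bash tool itself is not called here):

```python
# Build a `python -c` command for the bash tool; note the absolute mechanism path.
mech = "/abs/path/to/mechanism.yaml"  # placeholder - must be absolute in real use
code = (
    "import cantera as ct; "
    f"gas = ct.Solution('{mech}'); "
    "gas.TP = 300.0, 101325.0; "
    "print(gas.density)"
)
command = f'python -c "{code}"'
# command would then be passed as:
# bash(command, 30000, "Compute gas density with Cantera")
```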
- Claude Skills-Inspired Architecture: Exploring progressive disclosure system for CFD workflow guidance
- Domain Expertise Packaging: Skills will bundle instructions, metadata, and resources for specialized CFD tasks
- On-Demand Loading: Skills load context progressively rather than consuming upfront context window
- Workflow Specialization: Transform general-purpose agents into CFD specialists through modular capabilities
- Filesystem-Based Skills: Skills exist as directories containing instructions, scripts, and reference materials
Use todowrite/todoread proactively in these scenarios:
- Complex multi-step tasks - When a task requires 3 or more distinct steps or actions
- Non-trivial and complex tasks - Tasks that require careful planning or multiple operations
- User explicitly requests todo list - When the user directly asks you to use the todo list
- User provides multiple tasks - When users provide a list of things to be done (numbered or comma-separated)
- After receiving new instructions - Immediately capture user requirements as todos
- After completing a task - Mark it complete and add any new follow-up tasks
- When you start working on a new task - Mark the todo as in_progress (limit to ONE task at a time)
Skip using todo system when:
- There is only a single, straightforward task
- The task is trivial and tracking it provides no organizational benefit
- The task can be completed in less than 3 trivial steps
- The task is purely conversational or informational
- pending: Task not yet started
- in_progress: Currently working on (limit to ONE task at a time)
- completed: Task finished successfully
- cancelled: Task no longer needed
- Todo state is maintained per session using session_id parameter
- Automatic persistence across conversation turns within the same session
- Session isolation ensures todos from different sessions don't interfere
- State recovery when switching back to a session with existing todos
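A toy in-memory model of this behavior (the real todowrite/todoread tools persist state differently; the signatures here are assumptions):

```python
# Per-session store keyed by session_id; each session's todos are isolated.
_STORE: dict = {}

def todowrite(todos: list, session_id: str) -> dict:
    _STORE[session_id] = todos
    return {"status": "success"}

def todoread(session_id: str) -> dict:
    # Sessions with no recorded todos yield an empty list.
    return {"status": "success", "todos": _STORE.get(session_id, [])}

todowrite([{"id": "1", "content": "Check controlDict", "status": "in_progress"}], "s1")
```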
The read tool provides secure file access for agents with the following features:
Parameters:
- `filePath` (str): Absolute or relative path to the file to read
- `offset` (int): Line number to start reading from (0-based)
- `limit` (int): Maximum number of lines to read
- `skip_headers` (bool): Skip OpenFOAM-style copyright headers if detected
Security Features:
- External path access: Allows access to files outside project directory (with warnings)
- Binary file detection: Automatically rejects binary/image files
- Content-based binary detection: Scans for null bytes and high non-printable character ratios
Features:
- Line number formatting: `cat -n` style with 5-digit padding
- JSON-safe encoding: Content is JSON-encoded to handle special characters and quotes safely
- Long line truncation: Lines > 2000 chars are truncated with "..."
- Pagination: Shows "has_more" flag and continuation hints
- File suggestions: When file not found, suggests similar filenames in the same directory
Usage Examples:
# Read entire file
result = read("src/agents/agent.py")
# Read specific lines
result = read("src/agents/agent.py", offset=10, limit=50, skip_headers=False)
# Handle errors
if result["status"] == "error":
    print(f"Error: {result['error_message']}")
else:
    print(result["output"])

Best Practices:
- Use absolute paths when possible for clarity
- Check `metadata["has_more"]` to determine if pagination is needed
- Use `offset` and `limit` for large files to avoid memory issues
- Handle file not found errors gracefully - suggestions are provided automatically
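The has_more flag supports a simple paging loop; a sketch with a stand-in read stub that mimics the tool's documented return shape:

```python
# Stand-in for the real read tool, returning the documented shape.
LINES = [f"line {i}" for i in range(250)]

def read(filePath: str, offset: int, limit: int, skip_headers: bool) -> dict:
    chunk = LINES[offset:offset + limit]
    return {
        "status": "success",
        "output": "\n".join(chunk),
        "metadata": {"has_more": offset + limit < len(LINES)},
    }

# Page through the file 100 lines at a time.
chunks, offset, limit = [], 0, 100
while True:
    result = read("system/controlDict", offset=offset, limit=limit, skip_headers=True)
    if result["status"] == "error":
        break
    chunks.append(result["output"])
    if not result["metadata"]["has_more"]:
        break
    offset += limit
```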
Fast content search tool that works with any codebase size using regex patterns.
Parameters:
- `pattern` (str): The regex pattern to search for in file contents
- `path` (str, optional): The directory to search in. Defaults to the current working directory.
- `include` (str, optional): File pattern to include in the search (e.g. "*.py", "*.{py,md}")
Features:
- Fast regex search using ripgrep (when available) or Python fallback
- Supports full regex syntax (e.g., "log.*Error", "function\s+\w+", etc.)
- File pattern filtering with include parameter
- Results sorted by file modification time (newest first)
- Result limiting (100 matches max) with truncation indicators
- External path support: Can search directories outside project (with warnings)
- Case-sensitive searches by default
Usage Examples:
# Search for function definitions
result = grep("def \w+", include="*.py")
# Find import statements
result = grep("^import|^from", path="src/agents")
# Locate error handling
result = grep("except|catch|try:", include="*.py")
# Find configuration settings
result = grep("config|settings|parameters", include="*.yaml,*.json")
# Search with multiple file types
result = grep("TODO|FIXME", include="*.{py,js,ts}")

Best Practices:
- Use specific include patterns to limit search scope and improve performance
- For complex searches, consider using more specific paths
- Check `metadata["truncated"]` to see if results were limited
- Use regex anchors (^ and $) for line-start/end matching
- Escape special regex characters when searching for literal strings
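For the last point, Python's re.escape handles the escaping; for example:

```python
import re

# Searching for the literal string "gas.TP": escape it so '.' does not
# match any character.
literal = "gas.TP"
pattern = re.escape(literal)            # 'gas\\.TP'
assert re.fullmatch(pattern, "gas.TP")
assert not re.fullmatch(pattern, "gasXTP")
# pattern can now be passed as grep(pattern, include="*.py")
```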
Fast file pattern matching tool that works with any codebase size.
Parameters:
- `pattern` (str): The glob pattern to match files against
- `path` (str, optional): The directory to search in. If not specified, the current working directory will be used.
Features:
- Supports glob patterns like "**/*.js" or "src/**/*.ts"
- Returns matching file paths sorted by modification time
- Result limiting (100 files max) with truncation indicators
- External path support: Can search directories outside project (with warnings)
- Use this tool when you need to find files by name patterns
- When you are doing an open ended search that may require multiple rounds of globbing and grepping, use the Task tool instead
Usage Examples:
# Find all Python files in the project
result = glob("**/*.py")
# Find configuration files
result = glob("*.{yaml,yml,json,toml}")
# Search specific directory
result = glob("*.py", path="src/agents")
# Find documentation files
result = glob("**/*.{md,txt}")

Best Practices:
- Use `**/` for recursive directory traversal
- Combine with specific extensions for targeted searches
- Check `metadata["truncated"]` to see if results were limited
- Use relative paths for better readability in results
Advanced file editing tool with sophisticated string replacement and fuzzy matching.
Parameters:
- `filePath` (str): Absolute path to the file to modify
- `oldString` (str): The text to replace (can be empty for write operations)
- `newString` (str): The text to replace it with (must be different from oldString)
- `replaceAll` (bool): Replace all occurrences (default false)
Features:
- 9 different replacer strategies for robust matching:
- Simple exact matching
- Line-trimmed matching (ignores leading/trailing whitespace)
- Block anchor matching with similarity scoring
- Whitespace-normalized matching
- Indentation-flexible matching
- Escape sequence handling
- Trimmed boundary matching
- Context-aware multi-line matching
- Multi-occurrence exact matching
- Empty oldString support: When oldString is empty/whitespace, acts as a write operation
- Fuzzy matching: Uses Levenshtein distance for approximate matches
- Syntax validation: Compiles Python files after editing
- Diff generation: Provides unified diffs of changes
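To illustrate one of these strategies, a toy version of line-trimmed matching (the real replacers are more involved):

```python
def line_trimmed_match(haystack: str, needle: str) -> bool:
    """Check whether needle's lines occur consecutively in haystack,
    ignoring leading/trailing whitespace on every line."""
    h = [line.strip() for line in haystack.splitlines()]
    n = [line.strip() for line in needle.splitlines()]
    return any(h[i:i + len(n)] == n for i in range(len(h) - len(n) + 1))

# Matches even though the file is indented differently from oldString.
assert line_trimmed_match("    if ok:\n        run()\n", "if ok:\n    run()")
```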
Usage Examples:
# Replace single occurrence
result = edit("src/agents/agent.py", "old_text", "new_text")
# Replace all occurrences
result = edit("config.py", "DEBUG", "PRODUCTION", replaceAll=True)
# Write new content (empty oldString)
result = edit("new_file.txt", "", "New file content")

Best Practices:
- Use unique strings to avoid unintended replacements
- For multi-line changes, include sufficient context
- Empty oldString enables write functionality
- Review diffs before applying large changes
File writing tool that creates or overwrites files with content validation.
Parameters:
- `filePath` (str): Absolute path to the file to write
- `content` (str): The content to write to the file
Features:
- Overwrite protection: Handles existing vs new files
- Diff generation: Shows changes for existing files
- Syntax validation: Compiles Python files after writing
- Path validation: Ensures files are within project directory
- UTF-8 encoding: Proper text encoding with error handling
Usage Examples:
# Create new file
result = write("new_file.txt", "Hello world")
# Overwrite existing file
result = write("existing.txt", "New content")

Best Practices:
- Use absolute paths for consistency
- Prefer edit tool for modifications to existing files
- Check result metadata for diff information
Execute safe bash commands for file system exploration and Python chemical calculations.
Parameters:
- `command` (str): The bash command to execute (must be whitelisted)
- `timeout` (int): Optional timeout in milliseconds (default 120000ms / 2 minutes)
- `description` (str): Clear, concise description of what this command does in 5-10 words
Security Features:
- Command whitelist: Allows read-only commands (`find`, `ls`, `pwd`, `head`, `tail`, `wc`, `file`, `stat`, `du`, `tree`), Python execution (`python`, `python3`), and path utilities (`realpath`)
- Absolute path enforcement: Requires absolute paths for file references (configurable)
- Path validation: Ensures all file paths stay within project directory
- Python script validation: Scripts must import Cantera and perform chemical calculations
- No destructive operations: Prevents dangerous commands and file operations outside temp directory
Python/Cantera Support:
- Allows execution of Python scripts that use Cantera for chemical stoichiometric calculations
- Validates script content for Cantera imports and chemical calculation patterns
- Blocks dangerous operations like `os.system()`, `subprocess`, and arbitrary file writes
- Requires the Cantera library to be installed and available
Features:
- Timeout protection: Commands are terminated after specified timeout
- Output truncation: Long outputs are truncated at 30,000 characters
- Comprehensive error handling: Captures both stdout and stderr
- Exit code reporting: Returns command execution status
Usage Examples:
# List directory contents
result = bash("ls -la", 30000, "List directory contents")
# Find files by pattern
result = bash("find . -name '*.py' -type f", 60000, "Find Python files")
# Check file type
result = bash("file constant/polyMesh/boundary", 10000, "Check mesh file type")
# Get word count
result = bash("wc -l system/controlDict", 10000, "Count lines in controlDict")
# Get absolute path for a file
result = bash("realpath src/agents/agent.py", 5000, "Get absolute path for agent.py")
# Run Cantera chemical calculations
result = bash("python calculate_ic.py", 30000, "Calculate initial conditions using Cantera")

Best Practices:
- REQUIRED: Use absolute paths for all file references (configurable)
- Use `realpath <path>` to get absolute paths for existing files
- Example: `realpath src/agents/agent.py` → `/full/path/to/project/src/agents/agent.py`
- Avoid relative paths like `../`, `./`, or bare filenames
- Specify reasonable timeouts for long-running commands
- Check result status and exit codes
- Use descriptive command descriptions for logging
Directory listing tool with glob pattern filtering.
Parameters:
- `path` (str): Absolute path to the directory to list
- `ignore` (array): List of glob patterns to ignore
Features:
- Glob pattern filtering for selective listing
- Sorted results by name
- Directory vs file type indication
- Configurable ignore patterns
Usage Examples:
# List all files in directory
result = list("/home/user/project")
# List while ignoring Python bytecode artifacts
result = list("/home/user/project", ignore=["*.pyc", "__pycache__"])
# List with custom ignore patterns
result = list("/home/user/project", ignore=["*.log", "*.tmp", ".git"])

Best Practices:
- Use absolute paths for consistency
- Combine with ignore patterns to filter unwanted files
- Check result status for error handling
Execute OpenFOAM utilities with proper environment sourcing.
Parameters:
- `command` (str): OpenFOAM utility command to execute
- `timeout` (int): Execution timeout in milliseconds
- `description` (str): Description of the command being executed
Features:
- Automatic OpenFOAM environment sourcing
- Timeout protection for long-running commands
- Proper working directory management
- Command execution logging
Usage Examples:
# Run blockMesh
result = openfoam_bash("blockMesh", 30000, "Generate mesh from blockMeshDict")
# Run simpleFoam solver
result = openfoam_bash("simpleFoam", 300000, "Run steady-state solver")
# Check mesh quality
result = openfoam_bash("checkMesh", 10000, "Validate mesh quality")

Best Practices:
- Always specify reasonable timeouts
- Use descriptive command descriptions
- Verify utility availability before execution
Execute DeepFlame utilities with proper environment sourcing.
Parameters:
- `command` (str): DeepFlame utility command to execute
- `timeout` (int): Execution timeout in milliseconds
- `description` (str): Description of the command being executed
Features:
- Automatic OpenFOAM + DeepFlame environment sourcing
- Timeout protection for combustion simulations
- Proper working directory management
- Command execution logging
Usage Examples:
# Run combustion solver
result = deepflame_bash("dfLowMachFoam", 600000, "Run combustion simulation")
# Run particle tracking
result = deepflame_bash("dfSprayFoam", 600000, "Run spray combustion simulation")

Best Practices:
- Use longer timeouts for simulation runs
- Monitor resource usage for large cases
- Validate case setup before execution
Discover available OpenFOAM utilities in configured directories.
Parameters: None required.
Features:
- Automatic discovery of OpenFOAM installations
- Lists all available utilities and solvers
- Version detection and validation
- Path configuration awareness
Usage Examples:
# Discover available utilities
result = list_openfoam_utils()
# Check for specific solver availability
if "simpleFoam" in result["utilities"]:
    print("simpleFoam is available")

Best Practices:
- Run before attempting utility execution
- Cache results for performance
- Use for dynamic workflow adaptation
Discover available DeepFlame utilities in configured directories.
Parameters: None required.
Features:
- Automatic discovery of DeepFlame installations
- Lists combustion-specific utilities and solvers
- Integration with OpenFOAM base
- Path configuration awareness
Usage Examples:
# Discover available combustion utilities
result = list_deepflame_utils()
# Check for combustion solvers
if "dfLowMachFoam" in result["utilities"]:
    print("Combustion solver available")

Best Practices:
- Run before combustion simulations
- Verify OpenFOAM compatibility
- Use for workflow validation
Streaming event system that inherits from Google ADK Event class for enhanced content handling.
Features:
- Event Inheritance: Inherits from Google ADK Event class for proper content structure and metadata
- Automatic Field Copying: `from_event()` method copies all Event fields automatically
- Streaming-Specific Fields: Adds `streaming_type`, `streaming_data`, and `iteration` fields
- Backward Compatibility: Dict-like access maintained for existing code
- Dynamic Author: Uses actual agent name from `runner.agent.name` for Event author field
Usage Examples:
# Create from Event object (automatic field copying)
streaming_event = StreamingEvent.from_event(
    event=runner_event,
    streaming_type='response_chunk',
    streaming_data={'text': 'chunk content'},
    iteration=1
)
# Traditional constructor (manual field setting)
streaming_event = StreamingEvent(
    author='deepflame_agent',
    streaming_type='tool_call_start',
    streaming_data={'tool_name': 'read'},
    iteration=2
)
# Dict-like access (backward compatible)
event_type = streaming_event['type']
event_data = streaming_event['data']

Best Practices:
- Use `from_event()` when creating StreamingEvent from runner Event objects
- Author field is automatically set to the actual agent name
- StreamingEvent maintains full Event class compatibility and methods
- StreamingEvent maintains full Event class compatibility and methods
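The dict-like compatibility layer can be sketched as follows (a simplified stand-in; the real class inherits from the ADK Event class):

```python
class StreamingEventSketch:
    """Simplified stand-in showing dict-like access over attributes."""

    # Legacy dict keys mapped onto the streaming-specific fields.
    _ALIASES = {"type": "streaming_type", "data": "streaming_data"}

    def __init__(self, author: str, streaming_type: str,
                 streaming_data: dict, iteration: int):
        self.author = author
        self.streaming_type = streaming_type
        self.streaming_data = streaming_data
        self.iteration = iteration

    def __getitem__(self, key: str):
        # Resolve legacy keys, then fall back to the attribute name itself.
        return getattr(self, self._ALIASES.get(key, key))

ev = StreamingEventSketch("deepflame_agent", "tool_call_start",
                          {"tool_name": "read"}, 2)
assert ev["type"] == "tool_call_start"
assert ev["data"] == {"tool_name": "read"}
```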