
AgentFarm documentation

Start here for navigation; detailed guides live in this directory.

If you are new, read the User Guide, then the Developer Guide for setup and contribution patterns.

📚 Documentation overview

AgentFarm is a simulation and analysis platform for agent-based modeling and reinforcement learning experiments. The sections below link deeper guides and references.

🗂️ Structure

🚀 Quick Start Guides

🏗️ Core System Components

Agent System

  • Agents - Agent architecture, types, and behaviors
  • Perception - Sensory systems and observation processing
  • State System - Agent state management and transitions

Action System

Observation & Channels

Sparse Observation Storage (HYBRID mode)

The observation system supports tensor-backed sparse point storage via SparsePoints for channels that are naturally sparse (e.g., allies/enemies/trajectories). This reduces Python dict overhead and improves GPU transfers.

  • Configuration (in ObservationConfig):

    • storage_mode: HYBRID (default) uses SparsePoints for point-sparse channels; DENSE writes directly to a dense tensor.
    • sparse_backend: "scatter" (default) or "coo". Use coo when sum reduction with many duplicates is common.
    • default_point_reduction: "max" (default), "sum", or "overwrite".
    • channel_reduction_overrides: per-channel overrides by channel name.
  • Reductions:

    • max: keep maximum per index (deterministic, good for presence maps)
    • sum: accumulate contributions (good for intensities)
    • overwrite: last write wins (order-dependent; not deterministic with duplicates)
  • Metrics via AgentObservation.get_metrics():

    • dense_bytes, sparse_points, sparse_logical_bytes, memory_reduction_percent
    • cache_hits, cache_misses, dense_rebuilds, dense_rebuild_time_s_total
    • sparse_apply_calls, sparse_apply_time_s_total
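The three reduction modes determine how duplicate writes to the same cell combine. A minimal sketch of their semantics (the `reduce_points` helper here is purely illustrative, not part of the AgentFarm API):

```python
def reduce_points(points, reduction="max"):
    """Combine (y, x, value) contributions per cell, mimicking the
    max / sum / overwrite reduction modes described above."""
    cells = {}
    for y, x, value in points:
        key = (y, x)
        if key not in cells:
            cells[key] = value
        elif reduction == "max":
            cells[key] = max(cells[key], value)  # deterministic presence map
        elif reduction == "sum":
            cells[key] += value                  # accumulate intensities
        elif reduction == "overwrite":
            cells[key] = value                   # last write wins (order-dependent)
    return cells

# Two contributions land on the same cell (1, 1):
pts = [(1, 1, 1.0), (1, 1, 2.0), (2, 3, 5.0)]
reduce_points(pts, "max")        # {(1, 1): 2.0, (2, 3): 5.0}
reduce_points(pts, "sum")        # {(1, 1): 3.0, (2, 3): 5.0}
reduce_points(pts, "overwrite")  # {(1, 1): 2.0, (2, 3): 5.0}
```

Note that `overwrite` only matches `max` here because the larger value happened to arrive last; with duplicates, its result depends on write order.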

Example:

```python
config = ObservationConfig(
    R=6,
    sparse_backend="scatter",
    default_point_reduction="max",
    channel_reduction_overrides={"TRAILS": "sum"},
)
obs = AgentObservation(config)
tensor = obs.tensor()
metrics = obs.get_metrics()
```

Data & Analysis

AI & Learning

🔬 Research & Experiments

📖 Specialized Guides & Tutorials

Tutorials

Step-by-step guides for specific use cases:

  • Basic simulation setup
  • Custom agent implementation
  • Extending observation channels
  • Experiment management
  • Analysis and visualization

Analysis Techniques

Experiments

  • Experiments - Detailed experiment configurations
  • Parameter sweep examples
  • Comparative studies
  • Replication techniques

🔧 Technical Reference

API Documentation

  • Complete API Reference - All classes, methods, and functions
  • Module-specific API documentation
  • Type hints and signatures
  • Usage examples for each API

Configuration System

  • Configuration Guide - Comprehensive configuration reference
  • YAML configuration format
  • Parameter validation
  • Configuration management tools
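Parameter validation can be as simple as checking a parsed config dict before a run starts. A minimal sketch, assuming a nested dict as produced by a YAML loader; the key names here are illustrative, not AgentFarm's actual schema (see the Configuration Guide for that):

```python
# Hypothetical config dict, as it might look after parsing a YAML file
# (e.g. with yaml.safe_load); key names are illustrative only.
config = {
    "simulation": {"steps": 1000, "seed": 42},
    "observation": {"R": 6, "storage_mode": "HYBRID"},
}

def validate(cfg):
    """Reject obviously invalid parameter values before a run starts."""
    if cfg["simulation"]["steps"] <= 0:
        raise ValueError("simulation.steps must be positive")
    if cfg["observation"]["R"] < 1:
        raise ValueError("observation.R must be at least 1")
    if cfg["observation"]["storage_mode"] not in ("HYBRID", "DENSE"):
        raise ValueError("unknown storage_mode")
    return cfg

validate(config)  # passes silently; raises ValueError on bad input
```

Failing fast on invalid parameters keeps errors close to the config file rather than surfacing mid-simulation.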

Database & Persistence

🎯 Use Cases & Examples

Basic Usage Patterns

  1. Simple Simulation: Load config → Create environment → Add agents → Run simulation
  2. Custom Agents: Extend BaseAgent → Implement custom decision logic → Register behaviors
  3. Extended Observations: Create ChannelHandler → Register channel → Process observations
  4. Parameter Studies: Define parameter ranges → Run experiment → Analyze results
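The "Simple Simulation" pattern above can be sketched end to end. The `Environment` and `Agent` classes below are self-contained stand-ins for illustration, not AgentFarm's actual API; consult the API Reference for the real entry points:

```python
# Illustrative skeleton of: load config -> create environment ->
# add agents -> run simulation. Class names are stand-ins only.
class Environment:
    def __init__(self, config):
        self.config = config
        self.agents = []
        self.step_count = 0

    def add_agent(self, agent):
        self.agents.append(agent)

    def run(self, steps):
        for _ in range(steps):
            for agent in self.agents:
                agent.act(self)       # each agent acts once per step
            self.step_count += 1

class Agent:
    def __init__(self, name):
        self.name = name
        self.actions_taken = 0

    def act(self, env):
        self.actions_taken += 1       # real agents would observe and decide here

config = {"steps": 10}                # stand-in for a loaded config
env = Environment(config)
env.add_agent(Agent("a1"))
env.add_agent(Agent("a2"))
env.run(config["steps"])
```

The custom-agent pattern (2) follows the same shape: subclass the agent type and override its decision method.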

Advanced Scenarios

  • Multi-agent cooperation studies
  • Resource competition dynamics
  • Learning algorithm comparison
  • Emergent behavior analysis
  • Scalability testing

🤝 Contributing

We welcome contributions to both the platform and its documentation:

Ways to Contribute

  • Bug Reports: Use GitHub Issues for bugs
  • Feature Requests: Propose new features via GitHub Issues
  • Documentation: Improve existing docs or add new guides
  • Code Contributions: Submit pull requests for enhancements

Development Setup

  1. Fork the repository
  2. Clone your fork: git clone https://github.com/your-username/AgentFarm.git
  3. Create a virtual environment and install: pip install -r requirements.txt and pip install -e .
  4. Run tests: pytest (from the repository root)
  5. Submit pull request

Documentation Guidelines

  • Use clear, concise language
  • Include code examples where helpful
  • Follow existing documentation structure
  • Test examples to ensure they work
  • Update this README when adding new documentation

📞 Support & Community

Getting Help

  • Documentation: Start with this README and the guides listed above
  • Issues: Check existing GitHub Issues
  • Discussions: Use GitHub Discussions for questions and general discussion
  • Examples: Usage Examples, benchmark samples under benchmarks/examples/, and tests under tests/

Community Resources

  • Research code: farm/research/ (analysis helpers and experiment tooling)
  • Tutorials: Community-contributed tutorials and guides
  • Case Studies: Real-world applications and results

📋 Roadmap & Future Development

Planned Features

  • Enhanced visualization tools
  • Distributed simulation support
  • Additional agent types and behaviors
  • Advanced analysis frameworks
  • Integration with popular RL libraries

Research Directions

  • Complex social dynamics modeling
  • Evolutionary algorithm integration
  • Multi-objective optimization
  • Real-time adaptive systems

📄 License & Attribution

This project is part of the Dooders research initiative exploring complex adaptive systems through computational modeling.


🎓 Learning Path Recommendation:

  1. Start with Module Overview for high-level understanding
  2. Follow Usage Examples for hands-on experience
  3. Dive into Configuration Guide for customization
  4. Reference API Documentation for development
  5. Explore specialized guides for advanced topics

Happy simulating! 🚀