ennomuller/Stack-Scout-AI

πŸͺ Google Antigravity Workspace Template

Production-grade starter kit for autonomous AI agents on Google Antigravity.

Language: English | Chinese (repository homepage) | Chinese documentation | Español


🌟 Project Intent

In a world full of AI IDEs, I want enterprise-grade architecture to be as simple as Clone → Rename → Prompt.

This project leverages IDE context awareness (via .cursorrules and .antigravity/rules.md) to pre-embed a complete cognitive architecture in the repo.

When you open this project, your IDE stops being just an editor: it becomes an industry-savvy architect.

First principles:

  • Minimize repetition: the repo should encode defaults so setup is nearly zero.
  • Make intent explicit: capture architecture, context, and workflows in files, not tribal knowledge.
  • Treat the IDE as a teammate: contextual rules turn the editor into a proactive architect, not a passive tool.

Why do we need a thinking scaffold?

While building with Google Antigravity or Cursor, I found a pain point:

The IDE and models are powerful, but the empty project is too weak.

Every new project repeats the same boring setup:

  • "Should my code live in src or app?"
  • "How do I define utilities so Gemini recognizes them?"
  • "How do I help the AI remember prior context?"

This repetition wastes creative energy. My ideal workflow is: after a git clone, the IDE already knows what to do.

So I built this project: Antigravity Workspace Template.

⚡ Quick Start

Automated Installation (Recommended)

Linux / macOS:

# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Run the installer
chmod +x install.sh
./install.sh

# 3. Configure your API keys
nano .env

# 4. Run the agent
source venv/bin/activate
python src/agent.py

Windows:

# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Run the installer
install.bat

# 3. Configure your API keys (notepad .env)

# 4. Run the agent
python src/agent.py

Manual Installation

# 1. Clone the template
git clone https://github.com/study8677/antigravity-workspace-template.git my-project
cd my-project

# 2. Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Configure your API keys
cp .env.example .env  # (if available) or create .env manually
nano .env

# 5. Run the agent
python src/agent.py

That's it! The IDE auto-loads configuration via .cursorrules + .antigravity/rules.md. You're ready to prompt.

🎯 What Is This?

This is not another LangChain wrapper. It's a minimal, transparent workspace for building AI agents that:

  • 🧠 Have infinite memory (recursive summarization)
  • 🛠️ Auto-discover tools from src/tools/
  • 📚 Auto-inject context from .context/
  • 🔌 Connect to MCP servers seamlessly
  • 🤖 Coordinate multiple specialist agents
  • 📦 Save outputs as artifacts (plans, logs, evidence)

Clone → Rename → Prompt. That's the workflow.

🚀 Key Features

  • 🧠 Infinite Memory: recursive summarization compresses context automatically
  • 🧠 True Thinking: a "Deep Think" step uses Chain-of-Thought prompts before acting
  • 🎓 Skills System: modular capabilities as folders (src/skills/) with auto-loading
  • 🛠️ Universal Tools: drop Python functions in src/tools/ → auto-discovered
  • 📚 Auto Context: add files to .context/ → auto-injected into prompts
  • 🔌 MCP Support: connect GitHub, databases, filesystems, custom servers
  • 🤖 Swarm Agents: multi-agent orchestration with the Router-Worker pattern
  • ⚡ Gemini Native: optimized for Gemini 2.0 Flash
  • 🌐 LLM Agnostic: use OpenAI, Azure, Ollama, or any OpenAI-compatible API
  • 📂 Artifact-First: every task produces plans, logs, and evidence
  • 🔒 Sandbox Execution: configurable code execution environments (local by default)
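As a rough illustration of the recursive-summarization idea behind "Infinite Memory" (a hedged sketch, not the shipped memory.py, whose details may differ): recent turns stay verbatim while older turns are folded into a running summary.

```python
from typing import Callable, List


def naive_summarize(text: str, limit: int = 80) -> str:
    # Placeholder compressor: a real agent would call the LLM here.
    return text[:limit] + ("..." if len(text) > limit else "")


class RollingMemory:
    """Keeps the last `window` turns verbatim; evicted turns are recursively
    folded into a single running summary, so context never grows unbounded."""

    def __init__(self, window: int = 4,
                 summarize: Callable[[str], str] = naive_summarize):
        self.window = window
        self.summarize = summarize
        self.summary = ""
        self.turns: List[str] = []

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        if len(self.turns) > self.window:
            overflow = self.turns.pop(0)
            # Recursive step: compress (old summary + evicted turn) together.
            self.summary = self.summarize(self.summary + "\n" + overflow)

    def context(self) -> str:
        return (self.summary + "\n" if self.summary else "") + "\n".join(self.turns)
```

Swapping `naive_summarize` for a model call gives the "infinite memory" behavior: the prompt stays bounded while old information survives in compressed form.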

📚 Documentation

Full documentation is available in /docs/en/.

Sandbox Configuration (Zero-Config by default)

The sandbox lets the agent execute generated Python code safely and consistently. It defaults to a local subprocess with isolation and limits.

  • SANDBOX_TYPE: local (default) | docker (opt-in) | e2b (future)
  • SANDBOX_TIMEOUT_SEC: maximum execution time in seconds (default 30)
  • SANDBOX_MAX_OUTPUT_KB: truncate combined stdout/stderr to this many KB (default 10)

Docker (opt-in) extra variables:

  • DOCKER_IMAGE (default python:3.11-slim)
  • DOCKER_NETWORK_ENABLED (false by default)
  • DOCKER_CPU_LIMIT (default 0.5 cores)
  • DOCKER_MEMORY_LIMIT (default 256m)

Example:

export SANDBOX_TYPE=local
export SANDBOX_TIMEOUT_SEC=30
export SANDBOX_MAX_OUTPUT_KB=10
# Docker mode
# export SANDBOX_TYPE=docker
# export DOCKER_IMAGE=python:3.11-slim
# export DOCKER_NETWORK_ENABLED=false
# export DOCKER_CPU_LIMIT=0.5
# export DOCKER_MEMORY_LIMIT=256m
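A minimal sketch of what the local sandbox mode could look like, assuming the env vars above simply govern a plain subprocess run; the actual implementation in src/ may enforce stricter isolation:

```python
import os
import subprocess
import sys


def run_sandboxed(code: str) -> str:
    """Execute Python code in a child process, honoring the sandbox env vars."""
    timeout = int(os.getenv("SANDBOX_TIMEOUT_SEC", "30"))
    max_bytes = int(os.getenv("SANDBOX_MAX_OUTPUT_KB", "10")) * 1024
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
    except subprocess.TimeoutExpired:
        return "[sandbox] timed out"
    # Truncate to SANDBOX_MAX_OUTPUT_KB so a chatty script cannot flood the prompt.
    return (proc.stdout + proc.stderr)[:max_bytes]
```

The Docker mode would swap the `subprocess.run` call for a `docker run` invocation built from the DOCKER_* variables.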

πŸ—οΈ Project Structure

src/
├── agent.py           # Main agent loop
├── memory.py          # JSON memory manager
├── mcp_client.py      # MCP integration
├── swarm.py           # Multi-agent orchestration
├── agents/            # Specialist agents
├── tools/             # Your custom tools
└── skills/            # Modular skills (Zero-Config)

.context/             # Knowledge base (auto-injected)
.antigravity/         # Antigravity rules
artifacts/            # Outputs & evidence
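The .context/ auto-injection could be as simple as concatenating every file in that folder into the system prompt. This is a hedged sketch of the idea, not the loader actually shipped in src/:

```python
from pathlib import Path


def load_context(context_dir: str = ".context") -> str:
    """Concatenate all files under `context_dir` into one prompt section."""
    root = Path(context_dir)
    if not root.is_dir():
        return ""  # no knowledge base yet: inject nothing
    parts = []
    for path in sorted(root.glob("*")):
        if path.is_file():
            # Label each chunk with its filename so the model can cite sources.
            parts.append(f"## {path.name}\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)
```

Dropping a new file into .context/ then takes effect on the next run with no code changes.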

💡 Example: Build a Tool in 30 Seconds

# src/tools/my_tool.py
def analyze_sentiment(text: str) -> str:
    """Analyzes the sentiment of given text."""
    return "positive" if len(text) > 10 else "neutral"

Restart the agent. Done! The tool is now available.
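Auto-discovery like this can be built on importlib plus inspect. The following is an illustrative sketch under that assumption; the loader in this repo may also derive tool schemas from type hints:

```python
import importlib.util
import inspect
from pathlib import Path


def discover_tools(tools_dir: str = "src/tools"):
    """Import every module in `tools_dir` and collect public, documented functions."""
    tools = {}
    for path in Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        for name, fn in inspect.getmembers(module, inspect.isfunction):
            # The docstring doubles as the tool description shown to the model.
            if not name.startswith("_") and fn.__doc__:
                tools[name] = fn
    return tools
```

This is why the docstring in the example above matters: a function without one would have no description to surface to the agent.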

🔌 MCP Integration

Connect to external tools:

{
  "servers": [
    {
      "name": "github",
      "transport": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "enabled": true
    }
  ]
}

The agent automatically discovers and uses all MCP tools.
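For illustration only, a tiny helper that parses a config like the one above and keeps the enabled servers; the actual MCP handshake lives in src/mcp_client.py and is not reproduced here:

```python
import json


def enabled_servers(config_text: str):
    """Return the names of servers marked "enabled": true in the JSON config."""
    config = json.loads(config_text)
    return [s["name"] for s in config.get("servers", []) if s.get("enabled")]
```

Flipping `"enabled"` to `false` is enough to detach a server without deleting its entry.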

🤖 Multi-Agent Swarm

Decompose complex tasks:

from src.swarm import SwarmOrchestrator

swarm = SwarmOrchestrator()
result = swarm.execute("Build and review a calculator")

The swarm automatically:

  • 📤 Routes to Coder, Reviewer, Researcher agents
  • 🧩 Synthesizes results
  • 📂 Saves artifacts
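The Router-Worker pattern can be sketched with plain functions, routing by keyword rather than by model; SwarmOrchestrator presumably replaces both the routing and the synthesis step with LLM calls:

```python
from typing import Callable, Dict, List


def route(task: str) -> List[str]:
    """Router: pick specialist roles whose trigger keyword appears in the task."""
    roles = {"build": "coder", "review": "reviewer", "research": "researcher"}
    picked = [role for kw, role in roles.items() if kw in task.lower()]
    return picked or ["coder"]  # fall back to a single default worker


def run_swarm(task: str, workers: Dict[str, Callable[[str], str]]) -> str:
    """Run each routed worker on the task, then synthesize (here: concatenate)."""
    outputs = [workers[role](task) for role in route(task)]
    return "\n".join(outputs)
```

The shape is the same as the real swarm: one cheap routing decision up front, independent workers in the middle, one synthesis pass at the end.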

✅ What's Complete

  • ✅ Phases 1-7: Foundation, DevOps, Memory, Tools, Swarm, Discovery
  • ✅ Phase 8: MCP Integration (fully implemented)
  • 🚀 Phase 9: Enterprise Core (in progress)

🆕 Recent Updates

  • Added True Thinking: The agent now performs a real "Deep Think" step (Chain-of-Thought) before every action, generating a structured plan.
  • Added Skills System: New src/skills/ directory allows for modular, folder-based agent capabilities (Docs + Code).
  • Added local OpenAI-compatible backend support (e.g., Ollama) when no Google API key is provided.
  • Fixed .env loading so runs from the src/ folder still read the project-root config.
  • CLI entrypoints (agent.py and src/agent.py) now accept tasks via command-line arguments or the AGENT_TASK environment variable.

See Roadmap for details.

🤝 Contributing

Ideas are contributions too! Open an issue to:

  • Report bugs
  • Suggest features
  • Propose architecture (Phase 9)

Or submit a PR to improve docs or code.

👥 Contributors

  • @devalexanderdaza: first contributor; implemented demo tools, enhanced agent functionality, proposed the "Agent OS" roadmap, and completed MCP integration.
  • @Subham-KRLX: added dynamic tools and context loading (Fixes #4) and the multi-agent cluster protocol (Fixes #6).

⭐ Star History

Star History Chart

📄 License

MIT License. See LICENSE for details.


Explore Full Documentation →
