diff --git a/.claude/skills/decocms-mcp-deploy/SKILL.md b/.claude/skills/decocms-mcp-deploy/SKILL.md index def997f4..ed76fffc 100644 --- a/.claude/skills/decocms-mcp-deploy/SKILL.md +++ b/.claude/skills/decocms-mcp-deploy/SKILL.md @@ -38,12 +38,12 @@ MCPs pointing to external servers (Cloudflare official, Grain, Apify, etc.). No ## Workflow Summary -| Workflow | Trigger | What it does | -|----------|---------|--------------| -| `checks.yml` | push/PR | fmt, lint, typecheck changed MCPs | -| `publish-registry.yml` | push to main, manual | Publish app.json MCPs to Mesh registry | -| `deploy.yml` | push to main, manual | Build + `deco deploy` for CF Worker MCPs | -| `publish-jsr.yml` | push to main (shared/ or openrouter/) | Publish packages to JSR | +| Workflow | Trigger | What it does | +| ---------------------- | ------------------------------------- | ---------------------------------------- | +| `checks.yml` | push/PR | fmt, lint, typecheck changed MCPs | +| `publish-registry.yml` | push to main, manual | Publish app.json MCPs to Mesh registry | +| `deploy.yml` | push to main, manual | Build + `deco deploy` for CF Worker MCPs | +| `publish-jsr.yml` | push to main (shared/ or openrouter/) | Publish packages to JSR | No `deploy-preview.yml` — preview deploys on PRs were removed. @@ -54,6 +54,7 @@ No `deploy-preview.yml` — preview deploys on PRs were removed. ### `detect-changed-mcps.ts` Detects which MCPs have changed based on: + 1. MCPs in `deploy.json` with `watch` patterns matching changed files 2. Registry-only MCPs (have `app.json` but NOT in `deploy.json`) — detected if any file in their dir changed @@ -95,15 +96,18 @@ All have `wrangler.toml` in their directory. ## Adding a New MCP to the Pipeline ### Custom server (kubernetes-bun) + 1. Add to `deploy.json` with `platformName: kubernetes-bun` 2. Add `app.json` — `publish-registry.yml` handles it automatically ### CF Worker MCP + 1. Add to `deploy.json` with `platformName: cloudflare-workers` 2. 
Ensure `wrangler.toml` exists in the MCP dir 3. Optionally add `app.json` for registry metadata ### Official server (app.json only) + 1. Just create `app.json` — no `deploy.json` entry needed 2. `publish-registry.yml` auto-detects it as a registry-only MCP @@ -131,8 +135,8 @@ mcps: "perplexity,slack-mcp" ## Secrets Required -| Secret | Used by | -|--------|---------| -| `DECO_DEPLOY_TOKEN` | deploy.yml (deco deploy CLI) | -| `PUBLISH_API_KEY` | publish-registry.yml | +| Secret | Used by | +| ------------------------- | ------------------------------------------ | +| `DECO_DEPLOY_TOKEN` | deploy.yml (deco deploy CLI) | +| `PUBLISH_API_KEY` | publish-registry.yml | | `OPENAI_API_KEY` + others | deploy.yml (env vars passed to CF Workers) | diff --git a/.claude/skills/decocms-mcp-development/SKILL.md b/.claude/skills/decocms-mcp-development/SKILL.md index b42c49b2..0917ba67 100644 --- a/.claude/skills/decocms-mcp-development/SKILL.md +++ b/.claude/skills/decocms-mcp-development/SKILL.md @@ -22,6 +22,7 @@ Working directory: `/Users/jonasjesus/conductor/workspaces/mcps/san-antonio` (or ### Type 1: Official Server (app.json only) The MCP runs on an external server (Cloudflare, Grain, GitHub, etc.). We only provide: + - `app.json` — connection URL, auth, metadata - `README.md` — optional @@ -32,6 +33,7 @@ No `package.json`, no `deploy.json` entry, no workspace entry in root `package.j ### Type 2: Custom Server (deco HTTP) We build and host the server. 
Files: + ``` / app.json # registry metadata + connection URL @@ -67,7 +69,9 @@ const runtime = withRuntime({ tools: (env: Env) => tools.map((createTool) => createTool(env)), }); -if (runtime.fetch) { serve(runtime.fetch); } +if (runtime.fetch) { + serve(runtime.fetch); +} ``` ### `shared/deco.gen.ts` @@ -205,12 +209,12 @@ When an official HTTP server exists (e.g., `https://api.example.com/mcp`): ## Key Packages -| Package | Purpose | -|---------|---------| -| `@decocms/runtime` | `withRuntime`, `createPrivateTool` | -| `@decocms/mcps-shared` | `serve` utility | -| `zod` | Input schema validation | -| `undici` | Proxy-aware fetch, SSE streaming | +| Package | Purpose | +| ---------------------- | ---------------------------------- | +| `@decocms/runtime` | `withRuntime`, `createPrivateTool` | +| `@decocms/mcps-shared` | `serve` utility | +| `zod` | Input schema validation | +| `undici` | Proxy-aware fetch, SSE streaming | ## Common Patterns diff --git a/.github/workflows/README.md b/.github/workflows/README.md index 5a5b4ad9..603fca1c 100644 --- a/.github/workflows/README.md +++ b/.github/workflows/README.md @@ -9,6 +9,7 @@ This directory contains the centralized deployment workflows for all MCPs in the **Trigger**: Push to `main` branch **What it does**: + 1. Detects which MCP directories have changed 2. Builds and deploys only the changed MCPs to production 3. Uses the `deco deploy` command for each changed MCP @@ -20,6 +21,7 @@ This directory contains the centralized deployment workflows for all MCPs in the **Trigger**: Pull requests to `main` branch **What it does**: + 1. Detects which MCP directories have changed in the PR 2. Builds and deploys preview versions (without promoting) 3. 
Comments on the PR with preview URLs for each deployed MCP @@ -45,11 +47,13 @@ Both workflows use **automatic MCP discovery** powered by a TypeScript script (` ## Adding a New MCP **No configuration needed!** Just create a new directory with a `package.json` and the workflows will automatically: + - Detect it as an MCP - Monitor it for changes - Deploy it when changes occur Example: + ```bash mkdir my-new-mcp cd my-new-mcp @@ -73,6 +77,7 @@ Both workflows use GitHub Actions matrix strategy to deploy multiple MCPs in par ## Failure Handling The workflows use `fail-fast: false` in their matrix strategy, which means: + - If one MCP fails to deploy, others will continue - You'll get individual success/failure notifications for each MCP - The overall workflow will be marked as failed if any MCP fails @@ -80,6 +85,7 @@ The workflows use `fail-fast: false` in their matrix strategy, which means: ## Deployment Script Both workflows use the TypeScript deployment script located at `scripts/deploy.ts`. This script: + - Changes to the MCP directory - Installs dependencies with Bun - Runs the build script @@ -89,4 +95,3 @@ Both workflows use the TypeScript deployment script located at `scripts/deploy.t ## Artifacts The preview deployment workflow creates temporary artifacts containing deployment information (MCP name and preview URL) which are used to construct the PR comment. These artifacts are automatically cleaned up after 1 day. - diff --git a/.github/workflows/SECRETS.md b/.github/workflows/SECRETS.md index 8898542b..ee3d19f3 100644 --- a/.github/workflows/SECRETS.md +++ b/.github/workflows/SECRETS.md @@ -5,6 +5,7 @@ This document lists all secrets required to deploy MCPs via GitHub Actions. ## Required Secrets ### `DECO_DEPLOY_TOKEN` + - **Used by**: All MCPs - **Description**: Authentication token for Deco CLI - **How to obtain**: Generated via Deco CLI or dashboard @@ -12,28 +13,34 @@ This document lists all secrets required to deploy MCPs via GitHub Actions. 
## Optional Secrets (per MCP) ### MCP: `sora` + - **`OPENAI_API_KEY`**: OpenAI API key for Sora model - Obtain at: https://platform.openai.com/api-keys ### MCP: `veo` + - **`GOOGLE_GENAI_API_KEY`**: Google Generative AI API key for Veo model - Obtain at: https://aistudio.google.com/app/apikey - ⚠️ **Important**: The code expects `GOOGLE_GENAI_API_KEY`, not `VEO_TOKEN` ### MCP: `nanobanana` + - **`NANOBANANA_API_KEY`**: API key for Nanobanana service (OpenRouter) - Obtain at: https://openrouter.ai/keys ### MCP: `openrouter` + - **`OPENROUTER_API_KEY`**: API key used by OpenRouter MCP - Obtain at: https://openrouter.ai/keys ### MCP: `pinecone` + - **`PINECONE_TOKEN`**: Pinecone API token - Obtain at: https://app.pinecone.io/ - **`PINECONE_INDEX`**: Pinecone index name (if required) ### MCP: `meta-ads` + - **`META_ACCESS_TOKEN`**: Facebook Access Token for Meta Ads API - Obtain at: https://developers.facebook.com/tools/explorer/ - Select your app and generate token with required permissions: @@ -64,12 +71,13 @@ When you need to add support for a new secret: 2. **Update the deploy script** (`scripts/deploy.ts`): - Add the variable name to the `envVarsToPass` array (around line 139) + ```typescript const envVarsToPass = [ "OPENAI_API_KEY", "GOOGLE_GENAI_API_KEY", "NANOBANANA_API_KEY", - "YOUR_NEW_SECRET", // <- Add here + "YOUR_NEW_SECRET", // <- Add here // ... ]; ``` @@ -81,7 +89,7 @@ When you need to add support for a new secret: env: DECO_DEPLOY_TOKEN: ${{ secrets.DECO_DEPLOY_TOKEN }} OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} - YOUR_NEW_SECRET: ${{ secrets.YOUR_NEW_SECRET }} # <- Add here + YOUR_NEW_SECRET: ${{ secrets.YOUR_NEW_SECRET }} # <- Add here ``` ⚠️ **Note**: Yes, you still need to edit the workflows, but now it's simpler and centralized. Just add one line in the `env:` section. @@ -89,6 +97,7 @@ When you need to add support for a new secret: ## ⚠️ Attention: Rename VEO_TOKEN Secret If you have a secret called `VEO_TOKEN`, you need to: + 1. 
Create a new secret called `GOOGLE_GENAI_API_KEY` with the same value as `VEO_TOKEN` 2. Delete the `VEO_TOKEN` secret (or keep it if you prefer) diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml index 31973557..c56be3db 100644 --- a/.github/workflows/deploy.yml +++ b/.github/workflows/deploy.yml @@ -7,7 +7,7 @@ on: description: 'MCPs to deploy (comma-separated, e.g. "discord-read,slack-mcp"). Leave empty to detect from changes.' required: false type: string - default: '' + default: "" push: branches: - main diff --git a/.github/workflows/publish-registry.yml b/.github/workflows/publish-registry.yml index 00226885..7d1071dd 100644 --- a/.github/workflows/publish-registry.yml +++ b/.github/workflows/publish-registry.yml @@ -7,9 +7,9 @@ on: description: 'MCPs to publish (comma-separated, e.g. "discord-read,slack-mcp"). Leave empty to detect from changes.' required: false type: string - default: '' + default: "" dry_run: - description: 'Dry run (preview payloads without publishing)' + description: "Dry run (preview payloads without publishing)" required: false type: boolean default: false diff --git a/README.md b/README.md index 2f15e802..1036e832 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# mcps +# mcps First-party MCPs maintained by the decocms team. 
diff --git a/airtable/package.json b/airtable/package.json index 3f369a40..bb8ab640 100644 --- a/airtable/package.json +++ b/airtable/package.json @@ -1,8 +1,8 @@ { "name": "@decocms/airtable", "version": "1.0.0", - "description": "Airtable MCP for database operations", "private": true, + "description": "Airtable MCP for database operations", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/amplitude/README.md b/amplitude/README.md index 04096011..1ad26e5e 100644 --- a/amplitude/README.md +++ b/amplitude/README.md @@ -5,11 +5,13 @@ **Amplitude MCP** is a Model Context Protocol (MCP) server that connects AI assistants to Amplitude's product analytics platform for querying events, charts, cohorts, and user insights. ### Purpose + - Query Amplitude analytics data including event trends, funnels, and retention metrics - Access chart and dashboard data to surface product insights from AI tools - Explore user cohorts and behavioral segments programmatically ### Key Features + - 📈 Query event data, trends, and conversion funnels - 🔁 Retrieve retention and engagement metrics - 👥 Access and manage user cohorts and segments diff --git a/apify/README.md b/apify/README.md index 81e11d29..d86710f3 100644 --- a/apify/README.md +++ b/apify/README.md @@ -1,4 +1,4 @@ -# Apify MCP +# Apify MCP ## Project Description @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - List available Apify actors - Get details about specific actors - Run actors synchronously or asynchronously @@ -34,24 +35,29 @@ Get your Apify API token at: https://console.apify.com/account/integrations ## Available Tools ### `list_actors` + List all actors accessible to the user. ### `get_actor` + Get details of a specific actor by ID or name. ### `list_actor_runs` + List runs of a specific actor with filtering options. ### `get_actor_run` + Get details of a specific actor run, optionally including dataset items. 
### `run_actor_sync` + Run an actor synchronously and return dataset items when complete. ### `run_actor_async` + Run an actor asynchronously and return immediately with run ID. ## License MIT - diff --git a/apify/app.json b/apify/app.json index 8f2e749c..1e0e3c00 100644 --- a/apify/app.json +++ b/apify/app.json @@ -18,4 +18,3 @@ "mesh_description": "The Apify MCP provides comprehensive integration with the Apify platform, enabling AI agents to run web scraping actors, manage automation tasks, and extract data from websites. This MCP allows listing available actors, getting actor details, running actors synchronously or asynchronously, and retrieving run results with dataset items. Perfect for building intelligent automation workflows, data collection pipelines, and web monitoring solutions. Authentication is done via Apify API token passed in the Authorization header." } } - diff --git a/atlassian-rovo/README.md b/atlassian-rovo/README.md index 0ee37cfb..8bc9d01f 100644 --- a/atlassian-rovo/README.md +++ b/atlassian-rovo/README.md @@ -8,9 +8,9 @@ The Atlassian Rovo MCP Server is a cloud-based bridge between your Atlassian Clo With the Atlassian Rovo MCP Server, you can: -* **Summarize and search** Jira, Compass, and Confluence content without switching tools. -* **Create and update** issues or pages based on natural language commands. -* **Automate repetitive work**, like generating tickets from meeting notes or specs. +- **Summarize and search** Jira, Compass, and Confluence content without switching tools. +- **Create and update** issues or pages based on natural language commands. +- **Automate repetitive work**, like generating tickets from meeting notes or specs. It's designed for developers, content creators, and project teams who use IDEs or AI platforms and want to work with Atlassian data without constantly context switching. 
@@ -20,12 +20,12 @@ It's designed for developers, content creators, and project teams who use IDEs o The Atlassian Rovo MCP Server supports several clients, including: -* [OpenAI ChatGPT](https://platform.openai.com/docs/guides/tools-connectors-mcp) -* [Claude](https://code.claude.com/docs/en/mcp) -* [GitHub Copilot CLI](https://docs.github.com/en/copilot/concepts/agents/about-copilot-cli) -* [Gemini CLI](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md) -* [Amazon Quick Suite](https://docs.aws.amazon.com/quicksuite/latest/userguide/mcp-integration.html) -* [Visual Studio Code](https://code.visualstudio.com/) +- [OpenAI ChatGPT](https://platform.openai.com/docs/guides/tools-connectors-mcp) +- [Claude](https://code.claude.com/docs/en/mcp) +- [GitHub Copilot CLI](https://docs.github.com/en/copilot/concepts/agents/about-copilot-cli) +- [Gemini CLI](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md) +- [Amazon Quick Suite](https://docs.aws.amazon.com/quicksuite/latest/userguide/mcp-integration.html) +- [Visual Studio Code](https://code.visualstudio.com/) The Atlassian Rovo MCP Server also supports any **local MCP-compatible client** that can run on `localhost` and connect to the server via the `mcp-remote` proxy. This enables custom or third-party integrations that follow the MCP specification. 
@@ -43,16 +43,16 @@ Before connecting to the Atlassian Rovo MCP Server, review the setup requirement #### For supported clients -* An **Atlassian Cloud site** with Jira, Compass, and/or Confluence -* Access to **the client of choice** -* A modern browser to complete the OAuth 2.1 authorization flow, or API token credentials for headless authentication +- An **Atlassian Cloud site** with Jira, Compass, and/or Confluence +- Access to **the client of choice** +- A modern browser to complete the OAuth 2.1 authorization flow, or API token credentials for headless authentication #### For IDEs or local clients (Desktop setup) -* An **Atlassian Cloud site** with Jira, Compass, and/or Confluence -* A supported IDE (for example, **Claude desktop, VS Code, or Cursor**) or a custom MCP-compatible client -* **Node.js v18+** installed to run the local MCP proxy (`mcp-remote`) -* A modern browser for completing OAuth login, or API token credentials for headless authentication +- An **Atlassian Cloud site** with Jira, Compass, and/or Confluence +- A supported IDE (for example, **Claude desktop, VS Code, or Cursor**) or a custom MCP-compatible client +- **Node.js v18+** installed to run the local MCP proxy (`mcp-remote`) +- A modern browser for completing OAuth login, or API token credentials for headless authentication --- @@ -60,15 +60,15 @@ Before connecting to the Atlassian Rovo MCP Server, review the setup requirement Security is a core focus of the Atlassian Rovo MCP Server: -* All traffic is encrypted via HTTPS using TLS 1.2 or later. -* OAuth 2.1 and API token authentication provide secure access control. -* Data access respects Jira, Compass, and Confluence user permissions. -* If your organization uses IP allowlisting for Atlassian Cloud products, tool calls made through the Atlassian Rovo MCP Server also honor those IP rules. +- All traffic is encrypted via HTTPS using TLS 1.2 or later. +- OAuth 2.1 and API token authentication provide secure access control. 
+- Data access respects Jira, Compass, and Confluence user permissions. +- If your organization uses IP allowlisting for Atlassian Cloud products, tool calls made through the Atlassian Rovo MCP Server also honor those IP rules. For a deeper overview of the security model and admin controls, see: -* [Understand Atlassian Rovo MCP Server](https://support.atlassian.com/security-and-access-policies/docs/understand-atlassian-rovo-mcp-server/) -* [Control Atlassian Rovo MCP Server settings](https://support.atlassian.com/security-and-access-policies/docs/control-atlassian-rovo-mcp-server-settings/) +- [Understand Atlassian Rovo MCP Server](https://support.atlassian.com/security-and-access-policies/docs/understand-atlassian-rovo-mcp-server/) +- [Control Atlassian Rovo MCP Server settings](https://support.atlassian.com/security-and-access-policies/docs/control-atlassian-rovo-mcp-server-settings/) --- @@ -77,9 +77,11 @@ For a deeper overview of the security model and admin controls, see: ### Architecture and communication 1. A supported client connects to the server endpoint: + ``` https://mcp.atlassian.com/v1/mcp ``` + 2. Depending on your setup, a secure browser-based OAuth 2.1 flow is triggered, or API token authentication is used. 3. Once authorized, the client streams contextual data and receives real-time responses from Jira, Compass, or Confluence. @@ -94,10 +96,10 @@ Access is granted only to data that the user already has permission to view in A API token authentication is available for headless or long-running client setups. -* **Admin enablement required:** An organization admin must enable API token authentication for Rovo MCP Server. -* **Scoped token required:** Use a Rovo MCP scoped API token for the required tools and data access. 
-* **Configuration guide:** [Configure authentication via API token](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/configuring-authentication-via-api-token/) -* **Admin setting reference:** [Control Atlassian Rovo MCP Server settings - Configure authentication](https://support.atlassian.com/security-and-access-policies/docs/control-atlassian-rovo-mcp-server-settings/#Configure-authentication) +- **Admin enablement required:** An organization admin must enable API token authentication for Rovo MCP Server. +- **Scoped token required:** Use a Rovo MCP scoped API token for the required tools and data access. +- **Configuration guide:** [Configure authentication via API token](https://support.atlassian.com/atlassian-rovo-mcp-server/docs/configuring-authentication-via-api-token/) +- **Admin setting reference:** [Control Atlassian Rovo MCP Server settings - Configure authentication](https://support.atlassian.com/security-and-access-policies/docs/control-atlassian-rovo-mcp-server-settings/#Configure-authentication) --- @@ -107,26 +109,26 @@ Once connected, you can perform a variety of useful tasks from within your suppo ### Jira workflows -* **Search**: "Find all open bugs in Project Alpha." -* **Create/update**: "Create a story titled 'Redesign onboarding'." -* **Bulk create**: "Make five Jira issues from these notes." +- **Search**: "Find all open bugs in Project Alpha." +- **Create/update**: "Create a story titled 'Redesign onboarding'." +- **Bulk create**: "Make five Jira issues from these notes." ### Confluence workflows -* **Summarize**: "Summarize the Q2 planning page." -* **Create**: "Create a page titled 'Team Goals Q3'." -* **Navigate**: "What spaces do I have access to?" +- **Summarize**: "Summarize the Q2 planning page." +- **Create**: "Create a page titled 'Team Goals Q3'." +- **Navigate**: "What spaces do I have access to?" ### Compass workflows -* **Create**: "Create a service component based on the current repository." 
-* **Bulk create**: "Import components and custom fields from this CSV/JSON" -* **Query**: "What depends on the `api-gateway` service?" +- **Create**: "Create a service component based on the current repository." +- **Bulk create**: "Import components and custom fields from this CSV/JSON" +- **Query**: "What depends on the `api-gateway` service?" ### Combined tasks -* **Link content**: "Link these three Jira tickets to the 'Release Plan' page." -* **Find documentation**: "Fetch the Confluence documentation page linked to this Compass component." +- **Link content**: "Link these three Jira tickets to the 'Release Plan' page." +- **Find documentation**: "Fetch the Confluence documentation page linked to this Compass component." > [!NOTE] > Actual capabilities vary, depending on your permission level and client platform. @@ -139,7 +141,7 @@ Once connected, you can perform a variety of useful tasks from within your suppo Update your [AGENTS.md](https://agents.md/) with the Markdown below to reduce discovery tool calls, save time and tokens, and set maximum search results. -``` MD +```MD ## Atlassian Rovo MCP When connected to atlassian-rovo-mcp: @@ -161,40 +163,41 @@ For [Cursor](https://cursor.com/marketplace/atlassian), skills are part of the m If you're an admin preparing your organization to use the Atlassian Rovo MCP Server, review these key considerations. 
For more detailed admin guidance, see: -* [Understand Atlassian Rovo MCP server](https://support.atlassian.com/security-and-access-policies/docs/understand-atlassian-rovo-mcp-server/) -* [Control Atlassian Rovo MCP server settings](https://support.atlassian.com/security-and-access-policies/docs/control-atlassian-rovo-mcp-server-settings/) -* [Manage Atlassian Rovo MCP server](https://support.atlassian.com/security-and-access-policies/docs/manage-atlassian-rovo-mcp-server/) -* [Monitor Atlassian Rovo MCP server activity](https://support.atlassian.com/security-and-access-policies/docs/monitor-atlassian-rovo-mcp-server-activity/) +- [Understand Atlassian Rovo MCP server](https://support.atlassian.com/security-and-access-policies/docs/understand-atlassian-rovo-mcp-server/) +- [Control Atlassian Rovo MCP server settings](https://support.atlassian.com/security-and-access-policies/docs/control-atlassian-rovo-mcp-server-settings/) +- [Manage Atlassian Rovo MCP server](https://support.atlassian.com/security-and-access-policies/docs/manage-atlassian-rovo-mcp-server/) +- [Monitor Atlassian Rovo MCP server activity](https://support.atlassian.com/security-and-access-policies/docs/monitor-atlassian-rovo-mcp-server-activity/) ### Installation and access -* **Not a Marketplace App:** -The Atlassian Rovo MCP Server is _not_ installed via the Atlassian Marketplace or the **Manage apps** screen. Instead, it is installed automatically the first time a user completes the OAuth 2.1 (3LO) consent flow (just-in-time or "lazy loading" installation). -* **First-time installation requirements:** -The first user to complete the 3LO consent flow for your site must have access to the Atlassian apps requested by the MCP scopes (for example, Jira and/or Confluence). This ensures the MCP app is registered with the correct permissions for your site. 
-* **Subsequent user access:** -After the initial install, users with access to only one Atlassian app (for example, just Jira or just Confluence) can also complete the 3LO flow to access that Atlassian app through MCP. +- **Not a Marketplace App:** + The Atlassian Rovo MCP Server is _not_ installed via the Atlassian Marketplace or the **Manage apps** screen. Instead, it is installed automatically the first time a user completes the OAuth 2.1 (3LO) consent flow (just-in-time or "lazy loading" installation). +- **First-time installation requirements:** + The first user to complete the 3LO consent flow for your site must have access to the Atlassian apps requested by the MCP scopes (for example, Jira and/or Confluence). This ensures the MCP app is registered with the correct permissions for your site. +- **Subsequent user access:** + After the initial install, users with access to only one Atlassian app (for example, just Jira or just Confluence) can also complete the 3LO flow to access that Atlassian app through MCP. ### Manage, monitor, and revoke access -* **Admin controls:** -Site and organization admins can manage, review, or revoke the MCP app's access from [Manage your organization's Marketplace and third-party apps](https://support.atlassian.com/security-and-access-policies/docs/manage-your-users-third-party-apps/). The app appears in your site's **Connected apps** list after the first successful 3LO consent. -* **End-user controls:** -Individual users can revoke their own app authorizations from their profile settings. -* **Domain and IP controls:** -Use the **Rovo MCP server** settings page in Atlassian Administration to control which external AI tools and domains are allowed to connect. For details, see [Available Atlassian Rovo MCP server domains](https://support.atlassian.com/security-and-access-policies/docs/available-atlassian-rovo-mcp-server-domains/). 
If your organization uses IP allowlisting for Atlassian Cloud apps, requests made through the Atlassian Rovo MCP Server must originate from an IP address that is allowed by your organization's IP allowlist for the relevant Atlassian app. For configuration details, see [Specify IP addresses for app access](https://support.atlassian.com/security-and-access-policies/docs/specify-ip-addresses-for-product-access/). -* **Audit logging:** To support monitoring and compliance, key actions performed via the Atlassian Rovo MCP Server are logged in your organization's audit log. Admins can review these logs in Atlassian Administration. For more information, see [Monitor Atlassian Rovo MCP server activity](https://support.atlassian.com/security-and-access-policies/docs/monitor-atlassian-rovo-mcp-server-activity/). +- **Admin controls:** + Site and organization admins can manage, review, or revoke the MCP app's access from [Manage your organization's Marketplace and third-party apps](https://support.atlassian.com/security-and-access-policies/docs/manage-your-users-third-party-apps/). The app appears in your site's **Connected apps** list after the first successful 3LO consent. +- **End-user controls:** + Individual users can revoke their own app authorizations from their profile settings. +- **Domain and IP controls:** + Use the **Rovo MCP server** settings page in Atlassian Administration to control which external AI tools and domains are allowed to connect. For details, see [Available Atlassian Rovo MCP server domains](https://support.atlassian.com/security-and-access-policies/docs/available-atlassian-rovo-mcp-server-domains/). If your organization uses IP allowlisting for Atlassian Cloud apps, requests made through the Atlassian Rovo MCP Server must originate from an IP address that is allowed by your organization's IP allowlist for the relevant Atlassian app. 
For configuration details, see [Specify IP addresses for app access](https://support.atlassian.com/security-and-access-policies/docs/specify-ip-addresses-for-product-access/). +- **Audit logging:** To support monitoring and compliance, key actions performed via the Atlassian Rovo MCP Server are logged in your organization's audit log. Admins can review these logs in Atlassian Administration. For more information, see [Monitor Atlassian Rovo MCP server activity](https://support.atlassian.com/security-and-access-policies/docs/monitor-atlassian-rovo-mcp-server-activity/). ### Troubleshooting common issues -* **"Your site admin must authorize this app" error:** -A site admin must complete the 3LO consent flow before anyone else can use the MCP app. See ["Your site admin must authorize this app"](https://support.atlassian.com/atlassian-cloud/kb/your-site-admin-must-authorize-this-app-error-in-atlassian-cloud-apps/) error in Atlassian Cloud apps for more details. -* **"You don't have permission to connect from this IP address. Please ask your admin for access."** -This usually indicates that IP allowlisting is enabled and the user's current IP address isn't allowed to access Jira, Confluence, Compass, or Rovo via the Atlassian Rovo MCP Server. Ask your site or organization admin to review the IP allowlist configuration and add the relevant network or VPN IP ranges if appropriate. -* **App not appearing in Connected apps:** -Ensure the user is using the correct Atlassian account and site, and confirm the app is requesting the correct Atlassian app scopes (for example, Jira scopes). If issues persist, check [Manage your organization's Marketplace and third-party apps](https://support.atlassian.com/security-and-access-policies/docs/manage-your-users-third-party-apps/) or contact Atlassian Support. Also verify the user's Jira, Confluence, or Compass permissions in Atlassian Administration. 
+- **"Your site admin must authorize this app" error:** + A site admin must complete the 3LO consent flow before anyone else can use the MCP app. See ["Your site admin must authorize this app"](https://support.atlassian.com/atlassian-cloud/kb/your-site-admin-must-authorize-this-app-error-in-atlassian-cloud-apps/) error in Atlassian Cloud apps for more details. +- **"You don't have permission to connect from this IP address. Please ask your admin for access."** + This usually indicates that IP allowlisting is enabled and the user's current IP address isn't allowed to access Jira, Confluence, Compass, or Rovo via the Atlassian Rovo MCP Server. Ask your site or organization admin to review the IP allowlist configuration and add the relevant network or VPN IP ranges if appropriate. +- **App not appearing in Connected apps:** + Ensure the user is using the correct Atlassian account and site, and confirm the app is requesting the correct Atlassian app scopes (for example, Jira scopes). If issues persist, check [Manage your organization's Marketplace and third-party apps](https://support.atlassian.com/security-and-access-policies/docs/manage-your-users-third-party-apps/) or contact Atlassian Support. Also verify the user's Jira, Confluence, or Compass permissions in Atlassian Administration. ## Security + Model Context Protocol (MCP) lets AI agents connect to tools and Atlassian data using your account’s permissions, which creates powerful workflows but also structural risks. Any MCP client or server you enable (e.g., IDE plugins, desktop apps, hosted MCP servers, “one-click” integrations) can cause an AI agent to perform actions on your behalf. 
Large language models (LLMs) are vulnerable to [prompt injection](https://owasp.org/www-community/attacks/PromptInjection) and related attacks (such as [indirect prompt injection](https://owasp.org/www-community/attacks/PromptInjection) and [tool poisoning](https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks)). These attacks can instruct the agent to exfiltrate data or make unintended changes without explicit requests.

@@ -202,7 +205,9 @@
 To reduce risk, only use trusted MCP clients and servers, carefully review which tools and data each agent can access, and apply least privilege (scoped tokens, minimal project/workspace access). For any high‑impact or destructive action, require human confirmation and monitor audit logs for unusual activity. We strongly recommend reviewing Atlassian’s guidance on MCP risks at [MCP Clients: Understanding the potential security risks](https://www.atlassian.com/blog/artificial-intelligence/mcp-risk-awareness).

 ## Support and feedback
+
 Your feedback plays a crucial role in shaping the Atlassian Rovo MCP Server. If you encounter bugs, limitations, or have suggestions:
+
 - Visit the [Atlassian Support Portal](https://support.atlassian.com/) to report issues and feature requests.
 - Share your experiences and questions on the [Atlassian Community](https://community.atlassian.com/), and post developer-related questions on the [Atlassian Developer community](https://developer.atlassian.com/).
 - Go to our [Ecosystem Developer Portal](https://ecosystem.atlassian.net/servicedesk/customer/portal/14/user/login?destination=portal%2F14) if you are building an app and found a bug/issue or have suggestions.
diff --git a/automation-workflows/README.md b/automation-workflows/README.md index c610c3aa..e54d4cec 100644 --- a/automation-workflows/README.md +++ b/automation-workflows/README.md @@ -5,11 +5,13 @@ **Make.com MCP** is a Model Context Protocol (MCP) server that enables AI assistants to interact with Make.com (formerly Integromat) to build, manage, and monitor automation workflows. ### Purpose + - Create and manage Make.com scenarios and automation workflows from AI tools - Trigger existing scenarios and inspect their execution history - Manage connections, data stores, and other Make.com resources programmatically ### Key Features + - 🔄 List, create, and update automation scenarios - ▶️ Execute scenarios and monitor run history - 🔗 Manage app connections and API keys diff --git a/axiom/README.md b/axiom/README.md index 1501c3a9..7a68c5e5 100644 --- a/axiom/README.md +++ b/axiom/README.md @@ -14,6 +14,7 @@ Model Context Protocol Server ━━━━━━━━━━━━━━━━ ``` # axiom-mcp + The Axiom MCP Server connects AI assistants to your Axiom observability data using the Model Context Protocol (MCP). This repository contains: - A Cloudflare Workers app that hosts the MCP server (`apps/mcp`). @@ -24,10 +25,11 @@ For installation, setup, supported tools, authentication, and client-specific in https://axiom.co/docs/console/intelligence/mcp-server#axiom-mcp-server Issues and contributions are welcome. 
- - CSV formatting for tabular data instead of verbose JSON - - Automatic `maxBinAutoGroups` for time-series aggregations - - Intelligent result shaping that prioritizes important fields - - Adaptive truncation based on data volume + +- CSV formatting for tabular data instead of verbose JSON +- Automatic `maxBinAutoGroups` for time-series aggregations +- Intelligent result shaping that prioritizes important fields +- Adaptive truncation based on data volume ## Runtime URL Parameters @@ -42,12 +44,12 @@ Example connection URL (MCP Inspector): http://localhost:8788/sse?org-id=&max-age=500&with-otel=1 ``` - ## Troubleshooting ### Connection Issues Remote MCP connections are still early. If you experience issues: + 1. Try restarting your client 2. Disable and re-enable the Axiom MCP server 3. Check your authentication credentials @@ -58,6 +60,7 @@ Remote MCP connections are still early. If you experience issues: - **OAuth**: Ensure you're logged into Axiom in your browser OR + - **Personal Token**: Verify your token starts with `xapt-` and hasn't expired - **Organization ID**: Must match the organization that issued the token diff --git a/axiom/app.json b/axiom/app.json index bf8b3670..ebfd393f 100644 --- a/axiom/app.json +++ b/axiom/app.json @@ -13,7 +13,15 @@ "metadata": { "categories": ["Data", "Monitoring", "Analytics"], "official": true, - "tags": ["axiom", "apl", "datasets", "queries", "anomaly-detection", "monitoring", "observability"], + "tags": [ + "axiom", + "apl", + "datasets", + "queries", + "anomaly-detection", + "monitoring", + "observability" + ], "short_description": "List datasets, run APL queries, and explore anomalies and monitoring data", "mesh_description": "Provides access to the Axiom observability platform. It enables users to list datasets and schemas, run APL queries, and use prompts for exploration, anomaly detection, and monitoring. Axiom is a cloud-native observability platform for logs, traces, and metrics. 
Perfect for DevOps and SRE teams who want AI-powered log analysis and infrastructure monitoring." } diff --git a/blockscout/README.md b/blockscout/README.md index adf95de0..c452e012 100644 --- a/blockscout/README.md +++ b/blockscout/README.md @@ -68,16 +68,16 @@ Use [this deeplink](https://cursor.com/en/install-mcp?name=blockscout&config=eyJ 1. Add the following configuration to your `~/.gemini/settings.json` file: - ```json - { - "mcpServers": { - "blockscout": { - "httpUrl": "https://mcp.blockscout.com/mcp", - "timeout": 180000 - } - } - } - ``` + ```json + { + "mcpServers": { + "blockscout": { + "httpUrl": "https://mcp.blockscout.com/mcp", + "timeout": 180000 + } + } + } + ``` 2. For detailed Gemini CLI MCP server configuration instructions, see the [official documentation](https://github.com/google-gemini/gemini-cli/blob/main/docs/tools/mcp-server.md). @@ -96,10 +96,7 @@ If you want to run the server locally for development purposes: "mcpServers": { "blockscout": { "command": "docker", - "args": [ - "run", "--rm", "-i", - "ghcr.io/blockscout/mcp-server:latest" - ] + "args": ["run", "--rm", "-i", "ghcr.io/blockscout/mcp-server:latest"] } } } @@ -171,7 +168,7 @@ When the most recent reward distribution of Kinto token was made to the wallet ``` ```plaintext -Which methods of `0x1c479675ad559DC151F6Ec7ed3FbF8ceE79582B6` on the Ethereum +Which methods of `0x1c479675ad559DC151F6Ec7ed3FbF8ceE79582B6` on the Ethereum mainnet could emit `SequencerBatchDelivered`? ``` diff --git a/blog-post-generator/app.json b/blog-post-generator/app.json index eb5ac81d..c0b4f28c 100644 --- a/blog-post-generator/app.json +++ b/blog-post-generator/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Blog Post Generator MCP leverages n8n workflow automation to generate high-quality blog post suggestions from various context sources. It accepts three types of input: plain text, URLs for content extraction, or JSON data from the Content Scraper MCP. 
The workflow processes the context and returns 4 unique blog post suggestions, each with a title, full content, and summary. Perfect for content marketing, SEO optimization, knowledge base creation, or any application requiring automated content generation. Integrates seamlessly with the Content Scraper MCP for end-to-end content pipelines." } } - diff --git a/blog-post-generator/package.json b/blog-post-generator/package.json index 0f316dfc..1cdebcc2 100644 --- a/blog-post-generator/package.json +++ b/blog-post-generator/package.json @@ -1,8 +1,8 @@ { "name": "blog-post-generator", "version": "1.0.0", - "description": "Blog post generator MCP using n8n workflow automation", "private": true, + "description": "Blog post generator MCP using n8n workflow automation", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", @@ -26,4 +26,3 @@ "node": ">=22.0.0" } } - diff --git a/blog-post-generator/tsconfig.json b/blog-post-generator/tsconfig.json index 358b5f65..d1f67014 100644 --- a/blog-post-generator/tsconfig.json +++ b/blog-post-generator/tsconfig.json @@ -33,4 +33,3 @@ }, "include": ["server"] } - diff --git a/browserless/README.md b/browserless/README.md index c8374f35..85f8ab17 100644 --- a/browserless/README.md +++ b/browserless/README.md @@ -57,16 +57,18 @@ - [Why Browserless?](#-why-browserless) - [Licensing](#-licensing) - ## 🚀 Get Started in Seconds! 
Get up and running in three simple steps: ### Step 1: Run the Docker image + ```bash docker run -p 3000:3000 ghcr.io/browserless/chromium ``` + ### Step 2: Open the docs in your browser + Visit http://localhost:3000/docs **✅ Success!** Your browser service is live at `ws://localhost:3000` @@ -77,14 +79,14 @@ Visit http://localhost:3000/docs 📘 Puppeteer Example ```js -import puppeteer from 'puppeteer-core'; +import puppeteer from "puppeteer-core"; const browser = await puppeteer.connect({ - browserWSEndpoint: 'ws://localhost:3000', + browserWSEndpoint: "ws://localhost:3000", }); const page = await browser.newPage(); -await page.goto('https://example.com'); +await page.goto("https://example.com"); console.log(await page.title()); await browser.close(); ``` @@ -95,30 +97,28 @@ await browser.close(); 🎭 Playwright Example ```js -import pw from 'playwright-core'; +import pw from "playwright-core"; -const browser = await pw.firefox.connect( - 'ws://localhost:3000/firefox/playwright' -); +const browser = await pw.firefox.connect("ws://localhost:3000/firefox/playwright"); const page = await browser.newPage(); -await page.goto('https://example.com'); +await page.goto("https://example.com"); console.log(await page.title()); await browser.close(); ``` - **Note:** Use `ghcr.io/browserless/firefox` or `ghcr.io/browserless/multi` for Firefox/Webkit support. +**Note:** Use `ghcr.io/browserless/firefox` or `ghcr.io/browserless/multi` for Firefox/Webkit support.
### Output: + ``` Example Domain ``` - ## ✨ Features ### General Features @@ -200,10 +200,8 @@ Custom Enterprise infrastructure across major cloud providers. - > **Want to dive deeper?** Check out this [detailed guide](./LEARN_MORE.md) for advanced stuff including Docker configuration, hosting providers, SDK extensions, and more. - ## 💡 Why Browserless? **Running Chrome in the cloud or CI sucks.** diff --git a/cloudflare-ai-gateway/README.md b/cloudflare-ai-gateway/README.md index 3b671e0a..49d069ed 100644 --- a/cloudflare-ai-gateway/README.md +++ b/cloudflare-ai-gateway/README.md @@ -22,4 +22,3 @@ Requires a Cloudflare API token with AI Gateway permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com/products/ai-gateway/ - **Documentation**: https://developers.cloudflare.com/ai-gateway/ - diff --git a/cloudflare-ai-gateway/app.json b/cloudflare-ai-gateway/app.json index 16e3ecb4..d1567c07 100644 --- a/cloudflare-ai-gateway/app.json +++ b/cloudflare-ai-gateway/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["AI/ML", "API Management"], "official": true, - "tags": ["cloudflare", "ai", "gateway", "openai", "anthropic", "rate-limiting", "caching", "monitoring"], + "tags": [ + "cloudflare", + "ai", + "gateway", + "openai", + "anthropic", + "rate-limiting", + "caching", + "monitoring" + ], "short_description": "Manage AI API requests with monitoring, caching, rate limiting, and cost control", "mesh_description": "Cloudflare AI Gateway MCP provides unified management and control for AI API requests across multiple providers. 
This official MCP enables you to route requests through Cloudflare's global network for improved reliability and speed, cache AI responses to reduce costs and latency, implement rate limiting and request quotas per user or application, monitor AI API usage with detailed analytics and logging, implement fallback logic across multiple AI providers, control costs with spending limits and alerts, add authentication and access control to AI endpoints, and anonymize or redact sensitive data in requests. The platform supports all major AI providers including OpenAI, Anthropic, Google, Azure OpenAI, Hugging Face, and more. Perfect for applications using multiple AI models, managing AI costs at scale, ensuring reliability with automatic failover, monitoring AI usage patterns, and implementing governance policies. Use natural language to configure gateways, analyze usage patterns, optimize costs, set rate limits, and troubleshoot API issues across your AI infrastructure." } } - diff --git a/cloudflare-auditlogs/README.md b/cloudflare-auditlogs/README.md index b9cf0995..3312d17a 100644 --- a/cloudflare-auditlogs/README.md +++ b/cloudflare-auditlogs/README.md @@ -23,4 +23,3 @@ Requires a Cloudflare API token with Audit Logs read permissions. 
- **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/fundamentals/account-and-billing/account-security/review-audit-logs/ - diff --git a/cloudflare-auditlogs/app.json b/cloudflare-auditlogs/app.json index 29c0a560..4da0ee51 100644 --- a/cloudflare-auditlogs/app.json +++ b/cloudflare-auditlogs/app.json @@ -13,9 +13,16 @@ "metadata": { "categories": ["Security", "Compliance"], "official": true, - "tags": ["cloudflare", "audit", "logs", "security", "compliance", "governance", "activity-tracking"], + "tags": [ + "cloudflare", + "audit", + "logs", + "security", + "compliance", + "governance", + "activity-tracking" + ], "short_description": "Access account audit logs for tracking user actions, changes, and API activity", "mesh_description": "Cloudflare Audit Logs MCP provides comprehensive access to account-level audit trails for security monitoring and compliance requirements. This official MCP enables you to track all user actions and configuration changes across your Cloudflare account, monitor API access and usage patterns, view authentication events (logins, failures, MFA changes), audit permission and role changes for team members, track zone and domain configuration modifications, monitor security policy updates (firewall, rate limiting, bot management), generate compliance reports for SOC 2, ISO 27001, and other frameworks, configure alerts for suspicious or unauthorized activities, and export audit logs for long-term retention and SIEM integration. The platform provides detailed timestamps, user identification, IP addresses, and action descriptions for every event. Perfect for security teams monitoring for threats, compliance officers maintaining audit trails, administrators tracking configuration changes, and organizations requiring detailed activity logging. 
Use natural language to search audit logs, investigate security incidents, generate compliance reports, identify policy violations, and analyze user behavior patterns across your Cloudflare organization." } } - diff --git a/cloudflare-autorag/README.md b/cloudflare-autorag/README.md index bc009c44..68005c4c 100644 --- a/cloudflare-autorag/README.md +++ b/cloudflare-autorag/README.md @@ -22,4 +22,3 @@ Requires a Cloudflare API token with AI and Vectorize permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/vectorize/ - diff --git a/cloudflare-autorag/app.json b/cloudflare-autorag/app.json index 30d88a66..00da816d 100644 --- a/cloudflare-autorag/app.json +++ b/cloudflare-autorag/app.json @@ -18,4 +18,3 @@ "mesh_description": "Cloudflare AutoRAG MCP provides automated Retrieval-Augmented Generation infrastructure for building AI applications with context-aware responses. This official MCP enables you to automatically generate and store embeddings for your documents, manage vector databases with semantic search capabilities, integrate with Large Language Models for contextual responses, configure chunking strategies for optimal retrieval, implement hybrid search combining vector and keyword matching, manage knowledge bases with automatic updates and versioning, optimize retrieval performance with caching and indexing, and monitor RAG pipeline performance and accuracy. The platform handles the complexity of vector embeddings, similarity search, and context injection, allowing you to focus on building AI features. Perfect for building chatbots with document knowledge, semantic search engines, question-answering systems, content recommendation engines, and any application requiring AI with access to external knowledge. Integrates seamlessly with Cloudflare AI and Workers. 
Use natural language to configure RAG pipelines, manage knowledge bases, optimize retrieval accuracy, and troubleshoot context quality issues." } } - diff --git a/cloudflare-browser/README.md b/cloudflare-browser/README.md index 18121d27..c4475630 100644 --- a/cloudflare-browser/README.md +++ b/cloudflare-browser/README.md @@ -21,4 +21,3 @@ Requires a Cloudflare API token with Workers Browser Rendering permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/browser-rendering/ - diff --git a/cloudflare-browser/app.json b/cloudflare-browser/app.json index c281dbaa..393ae786 100644 --- a/cloudflare-browser/app.json +++ b/cloudflare-browser/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["Automation", "Testing"], "official": true, - "tags": ["cloudflare", "browser", "puppeteer", "playwright", "automation", "scraping", "testing", "rendering"], + "tags": [ + "cloudflare", + "browser", + "puppeteer", + "playwright", + "automation", + "scraping", + "testing", + "rendering" + ], "short_description": "Control and automate cloud browsers with Puppeteer and Playwright for scraping and testing", "mesh_description": "Cloudflare Browser MCP provides cloud-based browser automation and rendering capabilities using Puppeteer and Playwright at the edge. This official MCP enables you to run browser automation scripts without managing infrastructure, scrape dynamic websites with JavaScript rendering, perform automated testing across different browsers and devices, capture screenshots and generate PDFs of web pages, monitor website availability and visual changes, bypass bot detection and anti-scraping measures, execute browser scripts at scale with automatic load balancing, and access websites from global edge locations for localized testing. The platform handles browser lifecycle management, resource cleanup, and provides reliable execution at scale. 
Perfect for web scraping projects, automated testing pipelines, monitoring services, PDF generation from HTML, and any workflow requiring programmatic browser control. Supports full Puppeteer and Playwright APIs with additional Cloudflare-specific optimizations. Use natural language to create scraping workflows, run automated tests, capture page snapshots, and build browser automation solutions." } } - diff --git a/cloudflare-builds/README.md b/cloudflare-builds/README.md index 523545bf..e2425a32 100644 --- a/cloudflare-builds/README.md +++ b/cloudflare-builds/README.md @@ -20,4 +20,3 @@ Requires a Cloudflare API token with Pages permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://pages.cloudflare.com - **Documentation**: https://developers.cloudflare.com/pages/ - diff --git a/cloudflare-builds/app.json b/cloudflare-builds/app.json index cdc7f6e1..88d5a135 100644 --- a/cloudflare-builds/app.json +++ b/cloudflare-builds/app.json @@ -18,4 +18,3 @@ "mesh_description": "Cloudflare Builds MCP provides comprehensive management of Cloudflare Pages build and deployment processes. This official MCP enables you to trigger new builds and deployments for your static sites, monitor build status and progress in real-time, access build logs and error messages, manage build configurations and environment variables, configure custom build commands and output directories, integrate with Git repositories for automatic deployments, manage preview deployments for pull requests, and control build caching for faster deployment times. The platform supports popular frameworks like React, Vue, Next.js, Gatsby, Hugo, and more. Perfect for automating deployment workflows, managing multiple projects, monitoring build health, and integrating CI/CD pipelines with Cloudflare Pages. Use natural language to check build status, troubleshoot failed builds, manage deployment settings, and optimize build performance across your projects." 
} } - diff --git a/cloudflare-casb/README.md b/cloudflare-casb/README.md index b873b51c..5060fc8d 100644 --- a/cloudflare-casb/README.md +++ b/cloudflare-casb/README.md @@ -27,4 +27,3 @@ Requires a Cloudflare API token with CASB permissions and connected SaaS applica - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com/zero-trust/products/casb/ - **Documentation**: https://developers.cloudflare.com/cloudflare-one/applications/scan-apps/ - diff --git a/cloudflare-casb/app.json b/cloudflare-casb/app.json index 3004537e..9c0b9a78 100644 --- a/cloudflare-casb/app.json +++ b/cloudflare-casb/app.json @@ -13,9 +13,16 @@ "metadata": { "categories": ["Security", "Compliance"], "official": true, - "tags": ["cloudflare", "casb", "security", "compliance", "saas", "data-protection", "cloud-security"], + "tags": [ + "cloudflare", + "casb", + "security", + "compliance", + "saas", + "data-protection", + "cloud-security" + ], "short_description": "Cloud Access Security Broker for SaaS app security and compliance monitoring", "mesh_description": "Cloudflare CASB (Cloud Access Security Broker) MCP provides comprehensive security and compliance monitoring for SaaS applications used across your organization. This official MCP enables you to scan SaaS applications for security misconfigurations (overly permissive sharing, weak authentication), detect data exposure risks and shadow IT usage, monitor compliance with regulations (GDPR, HIPAA, SOC 2), track user access and permission changes across cloud apps, identify sensitive data in cloud storage (Google Drive, OneDrive, Dropbox), audit third-party integrations and OAuth grants, enforce data loss prevention (DLP) policies, generate security posture reports and compliance assessments, and receive alerts for high-risk activities or policy violations. The platform integrates with popular SaaS applications including Google Workspace, Microsoft 365, Salesforce, Slack, GitHub, and more. 
Perfect for security teams managing cloud application risks, compliance officers maintaining regulatory requirements, IT administrators controlling shadow IT, and organizations protecting sensitive data in the cloud. Use natural language to scan for vulnerabilities, investigate security findings, assess compliance status, review user permissions, and configure security policies across your SaaS ecosystem." } } - diff --git a/cloudflare-containers/README.md b/cloudflare-containers/README.md index 1a1e1653..27fdee15 100644 --- a/cloudflare-containers/README.md +++ b/cloudflare-containers/README.md @@ -21,4 +21,3 @@ Requires a Cloudflare API token with container management permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/workers/ - diff --git a/cloudflare-containers/app.json b/cloudflare-containers/app.json index 70db6c6c..92bba9e8 100644 --- a/cloudflare-containers/app.json +++ b/cloudflare-containers/app.json @@ -13,9 +13,16 @@ "metadata": { "categories": ["Infrastructure", "Containers"], "official": true, - "tags": ["cloudflare", "containers", "docker", "edge-computing", "deployment", "kubernetes", "microservices"], + "tags": [ + "cloudflare", + "containers", + "docker", + "edge-computing", + "deployment", + "kubernetes", + "microservices" + ], "short_description": "Manage and deploy containerized applications on Cloudflare's global edge network", "mesh_description": "Cloudflare Containers MCP provides comprehensive container orchestration and management on Cloudflare's global edge network. 
This official MCP enables you to deploy Docker containers to edge locations worldwide, manage container images and registries, configure automatic scaling based on demand, set up load balancing across container instances, manage environment variables and secrets, monitor container health and performance metrics, configure networking and service mesh, and integrate with CI/CD pipelines for automated deployments. The platform brings containers to the edge, reducing latency and improving application performance by running closer to users. Perfect for deploying microservices, API backends, web applications, and data processing workloads that benefit from global distribution. Supports standard container formats and integrates with existing Docker workflows. Use natural language to deploy applications, scale services, monitor container health, troubleshoot issues, and optimize resource allocation across Cloudflare's network." } } - diff --git a/cloudflare-dex/README.md b/cloudflare-dex/README.md index 82098151..45717ca5 100644 --- a/cloudflare-dex/README.md +++ b/cloudflare-dex/README.md @@ -23,4 +23,3 @@ Requires a Cloudflare API token with DEX/Zero Trust permissions. 
- **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com/zero-trust/ - **Documentation**: https://developers.cloudflare.com/cloudflare-one/insights/dex/ - diff --git a/cloudflare-dex/app.json b/cloudflare-dex/app.json index 68f3e0fa..ceb4b70f 100644 --- a/cloudflare-dex/app.json +++ b/cloudflare-dex/app.json @@ -13,9 +13,16 @@ "metadata": { "categories": ["Monitoring", "IT Management"], "official": true, - "tags": ["cloudflare", "dex", "monitoring", "endpoints", "performance", "connectivity", "zero-trust"], + "tags": [ + "cloudflare", + "dex", + "monitoring", + "endpoints", + "performance", + "connectivity", + "zero-trust" + ], "short_description": "Monitor Digital Employee Experience with endpoint connectivity and performance metrics", "mesh_description": "Cloudflare DEX (Digital Employee Experience) MCP provides comprehensive monitoring of employee endpoint connectivity and application performance. This official MCP enables you to monitor network connectivity and latency from employee devices, track application performance and load times, measure Zero Trust network access metrics, monitor VPN and tunnel connection quality, analyze WiFi and network performance issues, track device and OS compatibility problems, monitor authentication and security policy compliance, generate alerts for connectivity or performance issues, and create reports on overall employee digital experience. The platform provides visibility into how employees access corporate resources through Cloudflare's Zero Trust platform. Perfect for IT teams managing remote workforces, monitoring application performance, troubleshooting connectivity issues, ensuring security policy compliance, and optimizing employee productivity. Helps identify network bottlenecks, problematic applications, and infrastructure issues affecting user experience. 
Use natural language to query DEX metrics, investigate connectivity problems, analyze performance trends, identify problem devices or locations, and optimize Zero Trust policies for better employee experience." } } - diff --git a/cloudflare-dns-analytics/README.md b/cloudflare-dns-analytics/README.md index c4e4b413..5b261ffb 100644 --- a/cloudflare-dns-analytics/README.md +++ b/cloudflare-dns-analytics/README.md @@ -22,4 +22,3 @@ Requires a Cloudflare API token with DNS Analytics permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/dns/ - diff --git a/cloudflare-dns-analytics/app.json b/cloudflare-dns-analytics/app.json index 2f62021e..51668f93 100644 --- a/cloudflare-dns-analytics/app.json +++ b/cloudflare-dns-analytics/app.json @@ -13,9 +13,16 @@ "metadata": { "categories": ["Analytics", "Networking"], "official": true, - "tags": ["cloudflare", "dns", "analytics", "queries", "performance", "monitoring", "nameserver"], + "tags": [ + "cloudflare", + "dns", + "analytics", + "queries", + "performance", + "monitoring", + "nameserver" + ], "short_description": "Analyze DNS query patterns, performance, and geographic distribution of DNS traffic", "mesh_description": "Cloudflare DNS Analytics MCP provides comprehensive insights into DNS query patterns and performance across your domains. This official MCP enables you to monitor DNS query volumes and trends over time, analyze query types (A, AAAA, MX, TXT, CNAME, etc.), track response codes and error rates (NXDOMAIN, SERVFAIL, etc.), view geographic distribution of DNS queries worldwide, measure DNS response times and latency, identify top queried domains and subdomains, detect unusual query patterns and potential DDoS attacks, analyze DNSSEC validation rates and issues, and compare performance across different nameservers. 
The platform processes billions of DNS queries to provide real-time and historical analytics. Perfect for domain administrators monitoring DNS health, security teams detecting DNS-based attacks, network engineers optimizing DNS performance, and developers troubleshooting DNS issues. Use natural language to query DNS statistics, investigate response time issues, analyze traffic patterns, detect anomalies, and generate reports on DNS performance and security across your zones." } } - diff --git a/cloudflare-docs/README.md b/cloudflare-docs/README.md index c91c2b79..03f78a70 100644 --- a/cloudflare-docs/README.md +++ b/cloudflare-docs/README.md @@ -39,5 +39,4 @@ https://docs.mcp.cloudflare.com/sse --- -*This MCP provides public access to Cloudflare documentation.* - +_This MCP provides public access to Cloudflare documentation._ diff --git a/cloudflare-docs/app.json b/cloudflare-docs/app.json index f1647d0c..6a8863ba 100644 --- a/cloudflare-docs/app.json +++ b/cloudflare-docs/app.json @@ -12,7 +12,16 @@ "metadata": { "categories": ["Documentation"], "official": true, - "tags": ["documentation", "api-reference", "guides", "tutorials", "cloudflare", "learning", "support", "knowledge-base"], + "tags": [ + "documentation", + "api-reference", + "guides", + "tutorials", + "cloudflare", + "learning", + "support", + "knowledge-base" + ], "short_description": "Access Cloudflare's comprehensive documentation through natural language", "mesh_description": "Cloudflare Docs MCP provides intelligent access to Cloudflare's extensive documentation covering all products and services. This official documentation assistant helps you find information about Workers, Pages, R2 Storage, D1 Database, Queues, Durable Objects, KV, Stream, Images, CDN, DNS, SSL/TLS, WAF, DDoS Protection, Load Balancing, Argo, Spectrum, Magic Transit, and more. 
Search through detailed API references with code examples in multiple languages, step-by-step tutorials for common use cases, best practices guides, troubleshooting documentation, and architectural patterns. The MCP uses semantic search to understand your questions and provide contextually relevant answers, combining information from multiple documentation pages when needed. Get instant access to product limits, pricing details, feature comparisons, migration guides, security recommendations, and performance optimization tips. Perfect for developers building on Cloudflare's platform, the MCP can explain complex concepts, suggest solutions to problems, and guide you through configuration steps with clear, actionable information." } diff --git a/cloudflare-graphql/README.md b/cloudflare-graphql/README.md index 15c36531..ac8abfaf 100644 --- a/cloudflare-graphql/README.md +++ b/cloudflare-graphql/README.md @@ -30,4 +30,3 @@ Requires a Cloudflare API token with Analytics read permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/analytics/graphql-api/ - diff --git a/cloudflare-graphql/app.json b/cloudflare-graphql/app.json index 82ec8b30..865d03c3 100644 --- a/cloudflare-graphql/app.json +++ b/cloudflare-graphql/app.json @@ -18,4 +18,3 @@ "mesh_description": "Cloudflare GraphQL MCP provides powerful access to Cloudflare Analytics and operational data through a flexible GraphQL API. 
This official MCP enables you to build custom analytics queries with precise field selection, aggregate metrics across multiple dimensions (time, geography, device, etc.), query HTTP traffic, firewall events, DNS, Workers, and other datasets, create complex filters and time-range specifications, combine data from multiple sources in single queries, access historical data for trend analysis and reporting, optimize query performance with field-level data selection, generate custom dashboards and visualizations, and integrate analytics into business intelligence tools. The GraphQL API provides a more flexible and efficient alternative to REST endpoints, allowing you to request exactly the data you need in a single query. Perfect for building custom analytics dashboards, generating executive reports, conducting performance analysis, integrating Cloudflare data with other systems, and creating automated monitoring workflows. Use natural language to construct GraphQL queries, analyze traffic patterns, generate reports, troubleshoot performance issues, and extract insights from Cloudflare's comprehensive analytics datasets." } } - diff --git a/cloudflare-logs/README.md b/cloudflare-logs/README.md index bd7b9c96..7b5cd4ec 100644 --- a/cloudflare-logs/README.md +++ b/cloudflare-logs/README.md @@ -22,4 +22,3 @@ Requires a Cloudflare API token with Logs permissions. - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://www.cloudflare.com - **Documentation**: https://developers.cloudflare.com/logs/ - diff --git a/cloudflare-logs/app.json b/cloudflare-logs/app.json index 0c4d29aa..fd7abe8f 100644 --- a/cloudflare-logs/app.json +++ b/cloudflare-logs/app.json @@ -18,4 +18,3 @@ "mesh_description": "Cloudflare Logs MCP provides comprehensive access to all Cloudflare log data through Logpush and Logpull APIs. 
This official MCP enables you to query HTTP request logs with detailed metadata (IPs, user agents, cache status, response codes), access firewall event logs showing blocked requests and security actions, view DNS query logs for your zones, monitor Worker execution logs and errors, analyze CDN cache performance and hit rates, track bot detection and challenge events, configure Logpush jobs to export logs to external services (S3, R2, GCS, Azure, Splunk, Datadog), and set up custom log filters and sampling. The platform provides structured JSON logs with extensive fields for deep analysis. Perfect for security investigations, performance optimization, compliance requirements, debugging application issues, and building custom analytics dashboards. Logs include data from all Cloudflare services and can be queried in real-time or exported for long-term storage. Use natural language to search logs, investigate incidents, analyze traffic patterns, troubleshoot issues, and create log export configurations." 
} } - diff --git a/cloudflare-observability/README.md b/cloudflare-observability/README.md index 2420c64f..de65cea0 100644 --- a/cloudflare-observability/README.md +++ b/cloudflare-observability/README.md @@ -39,5 +39,4 @@ https://observability.mcp.cloudflare.com/sse --- -*This MCP requires an active Cloudflare account to function.* - +_This MCP requires an active Cloudflare account to function._ diff --git a/cloudflare-observability/app.json b/cloudflare-observability/app.json index 8ae03b61..071df5d7 100644 --- a/cloudflare-observability/app.json +++ b/cloudflare-observability/app.json @@ -12,9 +12,18 @@ "metadata": { "categories": ["Observability"], "official": true, - "tags": ["monitoring", "logs", "analytics", "metrics", "tracing", "cloudflare", "observability", "performance", "debugging"], + "tags": [ + "monitoring", + "logs", + "analytics", + "metrics", + "tracing", + "cloudflare", + "observability", + "performance", + "debugging" + ], "short_description": "Monitor and analyze your Cloudflare services with comprehensive observability tools", "mesh_description": "Cloudflare Observability provides enterprise-grade monitoring and analytics for all Cloudflare services including Workers, CDN, WAF, and Load Balancers. This official MCP enables you to access real-time logs from Workers, HTTP requests, firewall events, and DDoS attacks. Query and analyze performance metrics such as request latency, bandwidth usage, error rates, and cache hit ratios across global data centers. The platform offers distributed tracing capabilities to track requests across your entire application stack, helping identify bottlenecks and optimize performance. Access Web Analytics for visitor insights, GraphQL Analytics API for custom queries, and Logpush integration for streaming logs to external systems. Use natural language to configure alerting rules, create custom dashboards, analyze traffic patterns, investigate security incidents, and generate compliance reports. 
The MCP simplifies complex queries and provides AI-powered insights to help you understand your application's behavior and user experience." } } - diff --git a/cloudflare-radar/README.md b/cloudflare-radar/README.md index 725743d0..b3cab503 100644 --- a/cloudflare-radar/README.md +++ b/cloudflare-radar/README.md @@ -22,4 +22,3 @@ Cloudflare API token (optional for some endpoints, required for advanced feature - **GitHub**: https://github.com/cloudflare/mcp-server-cloudflare - **Website**: https://radar.cloudflare.com - **Documentation**: https://developers.cloudflare.com/radar/ - diff --git a/cloudflare-radar/app.json b/cloudflare-radar/app.json index d695e2b4..cc4df679 100644 --- a/cloudflare-radar/app.json +++ b/cloudflare-radar/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["Analytics", "Security"], "official": true, - "tags": ["cloudflare", "radar", "internet-trends", "analytics", "security", "threats", "traffic", "insights"], + "tags": [ + "cloudflare", + "radar", + "internet-trends", + "analytics", + "security", + "threats", + "traffic", + "insights" + ], "short_description": "Access global internet trends and insights from Cloudflare Radar with traffic and security data", "mesh_description": "Cloudflare Radar MCP provides access to comprehensive internet intelligence and trends based on data from Cloudflare's global network. This official MCP enables you to analyze global internet traffic patterns and trends, monitor DDoS attacks and security threats in real-time, track adoption of internet technologies (HTTP/3, IPv6, DNSSEC), view internet quality and connectivity metrics by country and network, analyze routing and BGP announcements, monitor outages and disruptions worldwide, track bot and automated traffic patterns, and access domain ranking and popularity data. The platform processes petabytes of data to provide insights into internet usage, security, performance, and reliability. 
Perfect for researchers studying internet trends, security teams monitoring threats, network operators tracking performance, and anyone interested in understanding global internet patterns. Use natural language to explore traffic trends, investigate security incidents, analyze technology adoption, and generate reports on internet health and performance across regions and networks." } } - diff --git a/cloudflare-workers/README.md b/cloudflare-workers/README.md index 0a97035b..6f1eec60 100644 --- a/cloudflare-workers/README.md +++ b/cloudflare-workers/README.md @@ -39,5 +39,4 @@ https://bindings.mcp.cloudflare.com/sse --- -*This MCP requires an active Cloudflare account to function.* - +_This MCP requires an active Cloudflare account to function._ diff --git a/cloudflare-workers/app.json b/cloudflare-workers/app.json index 3abfe36a..2f22caa1 100644 --- a/cloudflare-workers/app.json +++ b/cloudflare-workers/app.json @@ -13,7 +13,17 @@ "categories": ["Software Development"], "official": true, "mesh_unlisted": true, - "tags": ["serverless", "edge-computing", "javascript", "workers", "kv", "durable-objects", "cloudflare", "deployment", "functions"], + "tags": [ + "serverless", + "edge-computing", + "javascript", + "workers", + "kv", + "durable-objects", + "cloudflare", + "deployment", + "functions" + ], "short_description": "Deploy and manage serverless functions at the edge with Cloudflare Workers", "mesh_description": "Cloudflare Workers is a serverless platform that enables developers to deploy and run code at the edge, across Cloudflare's global network of data centers. With this official MCP, you can manage Workers, interact with Workers KV (key-value storage), configure Durable Objects for stateful applications, and manage bindings between different Cloudflare services. The platform supports JavaScript, TypeScript, Python, and Rust, allowing you to build highly scalable and performant applications that run close to your users. 
Workers can handle millions of requests per second with sub-millisecond latency, making them ideal for API gateways, middleware, authentication, A/B testing, and edge rendering. The MCP provides natural language access to deployment workflows, environment variables, routes configuration, and real-time monitoring of your Workers performance and usage metrics." } diff --git a/connection-binding/app.json b/connection-binding/app.json index 60be69e4..b219f07a 100644 --- a/connection-binding/app.json +++ b/connection-binding/app.json @@ -59,10 +59,7 @@ "additionalProperties": true } }, - "required": [ - "id", - "data" - ], + "required": ["id", "data"], "additionalProperties": false }, "outputSchema": { @@ -97,40 +94,24 @@ "description": "Organization ID this connection belongs to" }, "description": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Description of the connection" }, "icon": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Icon URL for the connection" }, "app_name": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Associated app name" }, "app_id": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Associated app ID" }, "connection_type": { "type": "string", - "enum": [ - "HTTP", - "SSE", - "Websocket" - ], + "enum": ["HTTP", "SSE", "Websocket"], "description": "Type of connection" }, "connection_url": { @@ -138,10 +119,7 @@ "description": "URL for the connection" }, "connection_token": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Authentication token" }, "connection_headers": { @@ -189,10 +167,7 @@ }, "grantType": { "type": "string", - "enum": [ - "authorization_code", - "client_credentials" - ] + "enum": ["authorization_code", "client_credentials"] } }, "required": [ @@ -270,10 +245,7 @@ "additionalProperties": {} } }, - "required": [ - "name", - "inputSchema" - ], + 
"required": ["name", "inputSchema"], "additionalProperties": false } }, @@ -299,11 +271,7 @@ }, "status": { "type": "string", - "enum": [ - "active", - "inactive", - "error" - ], + "enum": ["active", "inactive", "error"], "description": "Current status" } }, @@ -377,40 +345,24 @@ "description": "Organization ID this connection belongs to" }, "description": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Description of the connection" }, "icon": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Icon URL for the connection" }, "app_name": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Associated app name" }, "app_id": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Associated app ID" }, "connection_type": { "type": "string", - "enum": [ - "HTTP", - "SSE", - "Websocket" - ], + "enum": ["HTTP", "SSE", "Websocket"], "description": "Type of connection" }, "connection_url": { @@ -418,10 +370,7 @@ "description": "URL for the connection" }, "connection_token": { - "type": [ - "string", - "null" - ], + "type": ["string", "null"], "description": "Authentication token" }, "connection_headers": { @@ -469,10 +418,7 @@ }, "grantType": { "type": "string", - "enum": [ - "authorization_code", - "client_credentials" - ] + "enum": ["authorization_code", "client_credentials"] } }, "required": [ @@ -550,10 +496,7 @@ "additionalProperties": {} } }, - "required": [ - "name", - "inputSchema" - ], + "required": ["name", "inputSchema"], "additionalProperties": false } }, @@ -579,11 +522,7 @@ }, "status": { "type": "string", - "enum": [ - "active", - "inactive", - "error" - ], + "enum": ["active", "inactive", "error"], "description": "Current status" } }, diff --git a/content-scraper/README.md b/content-scraper/README.md index cc8fd895..a4cda1dd 100644 --- a/content-scraper/README.md +++ b/content-scraper/README.md @@ -25,6 +25,7 @@ The MCP 
expects a database with the following tables: ### 2. Install the MCP When installing, configure: + - `database.apiUrl`: Database API URL - `database.token`: Authentication token @@ -35,6 +36,7 @@ When installing, configure: Lists content already scraped and saved in the database. **Input:** + ```json { "table": "all", @@ -45,12 +47,14 @@ Lists content already scraped and saved in the database. ``` **Parameters:** + - `table`: Which source to query - `"all"`, `"contents"`, `"reddit"`, `"linkedin"`, or `"twitter"` - `startIndex`: Start index (default: 1) - `endIndex`: End index (default: 100) - `onlyThisWeek`: If `true`, returns only this week's content **Output:** + ```json { "success": true, @@ -79,11 +83,13 @@ Lists content already scraped and saved in the database. Returns the Weekly Report Publishing skill documentation. This skill teaches how to publish weekly digest reports to the `deco_weekly_report` database table. **Input:** + ```json {} ``` **Output:** + ```json { "success": true, @@ -98,11 +104,13 @@ Returns the Weekly Report Publishing skill documentation. This skill teaches how Lists all available skills/documentation that can be retrieved. 
**Input:** + ```json {} ``` **Output:** + ```json { "success": true, diff --git a/content-scraper/package.json b/content-scraper/package.json index a65348bc..f9da6937 100644 --- a/content-scraper/package.json +++ b/content-scraper/package.json @@ -1,8 +1,8 @@ { "name": "content-scraper", "version": "1.0.0", - "description": "Content extraction, deduplication and summarization MCP using Firecrawl and Supabase", "private": true, + "description": "Content extraction, deduplication and summarization MCP using Firecrawl and Supabase", "type": "module", "scripts": { "dev": "deco dev --vite", @@ -36,4 +36,3 @@ "node": ">=22.0.0" } } - diff --git a/content-scraper/server/prompts/article_analysis_system.md b/content-scraper/server/prompts/article_analysis_system.md index cf189792..a31ae174 100644 --- a/content-scraper/server/prompts/article_analysis_system.md +++ b/content-scraper/server/prompts/article_analysis_system.md @@ -1,5 +1,6 @@ You are an expert at analyzing blog articles about technology. Your task is to: + 1. Determine if the article is related to MCP (Model Context Protocol) - this includes articles about AI agents, LLM tools, AI integrations, Claude, Anthropic, or similar AI/ML infrastructure topics. 2. Generate a concise summary (2-3 sentences) 3. Extract 3-5 key points from the article @@ -14,12 +15,11 @@ Factor this into your quality assessment - higher authority sources should be we Respond ONLY with valid JSON in this exact format: { - "is_mcp_related": boolean, - "summary": "string", - "key_points": ["point1", "point2", "point3"], - "quality_score": number +"is_mcp_related": boolean, +"summary": "string", +"key_points": ["point1", "point2", "point3"], +"quality_score": number } quality_score should be between 0.0 and 1.0. If you cannot determine if the article is MCP-related or if there's insufficient content, set is_mcp_related to false. 
- diff --git a/content-scraper/server/prompts/article_list_system.md b/content-scraper/server/prompts/article_list_system.md index eb4690b6..22142113 100644 --- a/content-scraper/server/prompts/article_list_system.md +++ b/content-scraper/server/prompts/article_list_system.md @@ -5,18 +5,18 @@ IMPORTANT: The page content contains links in markdown format: [link text](url) You MUST use the EXACT URLs from these links. Do NOT guess or invent URLs. For each article, provide: + - title: The article title - url: The EXACT full URL from the link (do not modify or guess URLs) - published_at: The publication date if visible (in YYYY-MM-DD format, or null if not found) Respond ONLY with valid JSON in this exact format: { - "articles": [ - { "title": "string", "url": "string", "published_at": "string or null" } - ] +"articles": [ +{ "title": "string", "url": "string", "published_at": "string or null" } +] } Only include actual blog posts/articles, not navigation links, author pages, or category pages. Limit to the 10 most recent articles visible on the page. CRITICAL: Use the exact URLs from the markdown links in the content. Never invent URLs. - diff --git a/content-scraper/server/prompts/linkedin_post_analysis_system.md b/content-scraper/server/prompts/linkedin_post_analysis_system.md index b09edc90..f6b32720 100644 --- a/content-scraper/server/prompts/linkedin_post_analysis_system.md +++ b/content-scraper/server/prompts/linkedin_post_analysis_system.md @@ -1,5 +1,6 @@ You are an expert at analyzing LinkedIn posts about technology. Your task is to: + 1. 
Determine if the post is relevant and valuable - this includes posts about: - MCP (Model Context Protocol), AI agents, LLM tools, AI integrations - Software engineering best practices, architecture, system design @@ -25,13 +26,12 @@ Generic motivational posts, simple announcements without substance, or low-effor Respond ONLY with valid JSON in this exact format: { - "is_relevant": boolean, - "summary": "string", - "key_points": ["point1", "point2"], - "quality_score": number, - "relevance_reason": "string" +"is_relevant": boolean, +"summary": "string", +"key_points": ["point1", "point2"], +"quality_score": number, +"relevance_reason": "string" } quality_score should be between 0.0 and 1.0. If the post is too short or lacks substance, set is_relevant to false. - diff --git a/content-scraper/server/prompts/reddit_post_analysis_system.md b/content-scraper/server/prompts/reddit_post_analysis_system.md index 45d01a0e..5d676962 100644 --- a/content-scraper/server/prompts/reddit_post_analysis_system.md +++ b/content-scraper/server/prompts/reddit_post_analysis_system.md @@ -2,6 +2,7 @@ You are an expert at analyzing Reddit posts about AI and technology. You are analyzing a post from r/{{subreddit}}. Your task is to: + 1. Determine if the post is relevant and valuable - this includes posts about: - MCP (Model Context Protocol), AI agents, LLM tools, AI integrations - Software engineering best practices, architecture, system design @@ -23,6 +24,7 @@ Your task is to: 5. Provide a brief reason why the post is or isn't relevant IMPORTANT: Be selective. Only mark posts as relevant if they provide genuine value. + - Simple questions without substance should NOT be marked as relevant - Self-promotion without real content should NOT be marked as relevant - Posts with actual code, architecture, or detailed explanations ARE valuable @@ -31,13 +33,12 @@ IMPORTANT: Be selective. 
Only mark posts as relevant if they provide genuine val Respond ONLY with valid JSON in this exact format: { - "is_relevant": boolean, - "summary": "string", - "key_points": ["point1", "point2"], - "quality_score": number, - "relevance_reason": "string" +"is_relevant": boolean, +"summary": "string", +"key_points": ["point1", "point2"], +"quality_score": number, +"relevance_reason": "string" } quality_score should be between 0.0 and 1.0. If the post is too short or lacks substance, set is_relevant to false. - diff --git a/data-for-seo/README.md b/data-for-seo/README.md index 396fce9f..9f5dfd6f 100644 --- a/data-for-seo/README.md +++ b/data-for-seo/README.md @@ -1,4 +1,4 @@ -# DataForSEO MCP +# DataForSEO MCP ## Project Description @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Perform comprehensive keyword research and competitive analysis - Analyze SERP results (organic, news, and historical trends) - Get backlink analysis and domain authority metrics @@ -15,20 +16,21 @@ This MCP server allows client applications to: ### Key Features -- 🔍 **Keyword Research**: Search volume and related keywords (2 tools) -- 📊 **SERP Analysis**: Organic and news SERP data (2 tools) -- 🎯 **Domain Analysis**: Ranked keywords and domain authority (2 tools) -- 🔄 **Real-time Data**: Live API endpoints with 2-8 second response times -- 🌐 **Multi-language Support**: Analyze data for different languages and locations -- 💰 **Pay-as-you-go**: All tools work with credit-based pricing (no monthly plans required) +- 🔍 **Keyword Research**: Search volume and related keywords (2 tools) +- 📊 **SERP Analysis**: Organic and news SERP data (2 tools) +- 🎯 **Domain Analysis**: Ranked keywords and domain authority (2 tools) +- 🔄 **Real-time Data**: Live API endpoints with 2-8 second response times +- 🌐 **Multi-language Support**: Analyze data for different languages and locations +- 💰 **Pay-as-you-go**: All tools work with credit-based pricing (no monthly plans required) - 
🛠️ **MCP Tools**: 6 working tools total, easy integration with MCP-compatible AI assistants > **⚠️ Note**: Some tools require additional DataForSEO subscriptions or are not available: +> > - **Backlinks API** (3 tools) - Requires Backlinks subscription ($99/month) > - **Google Trends** (1 tool) - API parameter issues (under investigation) > - **Keyword Difficulty** (1 tool) - Not available via live API > - **Keyword Suggestions** (2 tools) - Not available via live API -> - **Competitors Discovery** (1 tool) - Not available via live API +> - **Competitors Discovery** (1 tool) - Not available via live API > - **Historical SERP** (1 tool) - Not available via live API ## Setup / Installation @@ -42,11 +44,13 @@ This MCP server allows client applications to: ### Local Installation 1. Navigate to the data-for-seo directory: + ```bash cd data-for-seo ``` 2. Install dependencies: + ```bash bun install ``` @@ -56,6 +60,7 @@ bun install - **API Password**: Your DataForSEO API Token (NOT your account password) 4. Start the development server: + ```bash bun run dev ``` @@ -82,27 +87,28 @@ This creates a production bundle at `dist/server/main.js`. 
### 📊 Summary: 6 Working Tools ✅ -| Category | Tools | Best For | Status | -|----------|-------|----------|--------| -| **Keywords** (2) | Search Volume, Related Keywords | Keyword research and discovery | ✅ Working | -| **Domain Analysis** (2) | Ranked Keywords, Domain Rank | Competitive intelligence and authority | ✅ Working | -| **SERP** (2) | Organic SERP, News SERP | Ranking analysis and monitoring | ✅ Working | +| Category | Tools | Best For | Status | +| ----------------------- | ------------------------------- | -------------------------------------- | ---------- | +| **Keywords** (2) | Search Volume, Related Keywords | Keyword research and discovery | ✅ Working | +| **Domain Analysis** (2) | Ranked Keywords, Domain Rank | Competitive intelligence and authority | ✅ Working | +| **SERP** (2) | Organic SERP, News SERP | Ranking analysis and monitoring | ✅ Working | ### ⚠️ Tools Not Available (9 tools) -| Category | Tools | Status | Reason | -|----------|-------|--------|--------| -| **Backlinks** (3) | Overview, List, Referring Domains | 🔒 Requires Subscription | Need $99/month Backlinks plan | -| **Google Trends** (1) | Trends Explore | ⚠️ API Issue | Parameter validation error | -| **Keyword Research** (3) | Difficulty, Suggestions, Ideas | ❌ Not Available | Not in live API | -| **Domain Analysis** (1) | Competitors Discovery | ❌ Not Available | Not in live API | -| **SERP** (1) | Historical SERP | ❌ Not Available | Not in live API | +| Category | Tools | Status | Reason | +| ------------------------ | --------------------------------- | ------------------------ | ----------------------------- | +| **Backlinks** (3) | Overview, List, Referring Domains | 🔒 Requires Subscription | Need $99/month Backlinks plan | +| **Google Trends** (1) | Trends Explore | ⚠️ API Issue | Parameter validation error | +| **Keyword Research** (3) | Difficulty, Suggestions, Ideas | ❌ Not Available | Not in live API | +| **Domain Analysis** (1) | Competitors Discovery | ❌ Not 
Available | Not in live API | +| **SERP** (1) | Historical SERP | ❌ Not Available | Not in live API | --- ### Keywords Tools (4 tools) #### `DATAFORSEO_GET_SEARCH_VOLUME` + **[ASYNC - Standard Plan]** Get search volume, CPC, and competition data for up to 1000 keywords at once. **Response Time:** 2-5 seconds @@ -110,6 +116,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { keywords: string[]; // Array of keywords to analyze (max 1000) @@ -125,6 +132,7 @@ This creates a production bundle at `dist/server/main.js`. --- #### `DATAFORSEO_GET_RELATED_KEYWORDS` + **[ASYNC - DataForSEO Labs]** Get keyword suggestions with semantic relationships. **Response Time:** 3-10 seconds @@ -132,6 +140,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans (higher cost) **Input:** + ```typescript { keyword: string; // Seed keyword (single keyword) @@ -148,7 +157,8 @@ This creates a production bundle at `dist/server/main.js`. --- -#### `DATAFORSEO_GOOGLE_TRENDS` +#### `DATAFORSEO_GOOGLE_TRENDS` + **[ASYNC - Standard Plan]** Get Google Trends data for up to 5 keywords including interest over time, regional interest, and related queries. **Response Time:** 3-8 seconds @@ -156,12 +166,13 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { keywords: string[]; // 1-5 keywords to compare trends locationName?: string; // Default: "United States" locationCode?: number; // Alternative to locationName - timeRange?: string; // "now 1-d", "now 7-d", "today 1-m", "today 3-m", + timeRange?: string; // "now 1-d", "now 7-d", "today 1-m", "today 3-m", // "today 12-m", "today 5-y", "2004-present" category?: number; // Category ID (0 = All categories) } @@ -170,6 +181,7 @@ This creates a production bundle at `dist/server/main.js`. 
**Returns:** Interest over time (trend graphs), regional interest by location, related queries, rising queries **Use Cases:** + - Track keyword popularity trends over time - Identify seasonal patterns in search behavior - Compare multiple keywords trending patterns @@ -178,7 +190,8 @@ This creates a production bundle at `dist/server/main.js`. --- -#### `DATAFORSEO_KEYWORD_DIFFICULTY` +#### `DATAFORSEO_KEYWORD_DIFFICULTY` + **[ASYNC - DataForSEO Labs]** Get keyword difficulty scores (0-100) for up to 100 keywords at once. **Response Time:** 3-10 seconds @@ -186,6 +199,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans (DataForSEO Labs) **Input:** + ```typescript { keywords: string[]; // 1-100 keywords to analyze @@ -199,6 +213,7 @@ This creates a production bundle at `dist/server/main.js`. **Returns:** Difficulty score (0-100, lower = easier to rank), competitive metrics, top-ranking domains, ranking complexity analysis **Difficulty Score Interpretation:** + - **0-20**: Very Easy - Low competition, great for new websites - **21-40**: Easy - Moderate competition, achievable with good content - **41-60**: Medium - Competitive, requires SEO strategy @@ -206,6 +221,7 @@ This creates a production bundle at `dist/server/main.js`. - **81-100**: Very Hard - Extremely competitive, major brands/authority sites **Use Cases:** + - Evaluate keyword competitiveness before targeting - Build content strategy around low-difficulty keywords - Prioritize keyword opportunities by difficulty vs. search volume @@ -216,7 +232,8 @@ This creates a production bundle at `dist/server/main.js`. ### Domain Analysis Tools (3 tools) -#### `DATAFORSEO_RANKED_KEYWORDS` +#### `DATAFORSEO_RANKED_KEYWORDS` + **[ASYNC - DataForSEO Labs]** Get ALL keywords a domain ranks for in Google with positions, search volume, and estimated traffic. **Response Time:** 5-15 seconds @@ -224,6 +241,7 @@ This creates a production bundle at `dist/server/main.js`. 
**Plan Required:** All plans (DataForSEO Labs) **Input:** + ```typescript { target: string; // Domain to analyze (e.g., "example.com") @@ -239,6 +257,7 @@ This creates a production bundle at `dist/server/main.js`. **Returns:** Complete list of keywords with rankings (position 1-100), search volume, CPC, traffic estimates, URL that ranks **Use Cases:** + - Discover ALL keywords a competitor ranks for - Find keyword gaps between your site and competitors - Identify low-hanging fruit opportunities (high volume, low competition) @@ -247,7 +266,8 @@ This creates a production bundle at `dist/server/main.js`. --- -#### `DATAFORSEO_DOMAIN_RANK` +#### `DATAFORSEO_DOMAIN_RANK` + **[ASYNC - DataForSEO Labs]** Get comprehensive domain authority metrics and organic performance overview. **Response Time:** 2-5 seconds @@ -255,15 +275,17 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans (DataForSEO Labs) **Input:** + ```typescript { - target: string; // Domain to analyze (e.g., "example.com") + target: string; // Domain to analyze (e.g., "example.com") } ``` **Returns:** Domain rank score, total organic keywords count, estimated traffic, organic cost (traffic value), visibility score **Use Cases:** + - Quick domain authority assessment - Compare domain strength across competitors - Track domain growth over time @@ -272,7 +294,8 @@ This creates a production bundle at `dist/server/main.js`. --- -#### `DATAFORSEO_COMPETITORS_DOMAIN` +#### `DATAFORSEO_COMPETITORS_DOMAIN` + **[ASYNC - DataForSEO Labs]** Automatically discover competitor domains based on common keyword rankings. **Response Time:** 5-12 seconds @@ -280,6 +303,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans (DataForSEO Labs) **Input:** + ```typescript { target: string; // Your domain (e.g., "yoursite.com") @@ -294,6 +318,7 @@ This creates a production bundle at `dist/server/main.js`. 
**Returns:** List of competitor domains with common keywords count, organic keywords overlap, estimated traffic, competitive metrics **Use Cases:** + - Automated competitor discovery (no manual research!) - Identify direct and indirect competitors - Analyze keyword overlap and competitive gaps @@ -304,7 +329,8 @@ This creates a production bundle at `dist/server/main.js`. ### Keyword Suggestions Tools (2 tools) -#### `DATAFORSEO_KEYWORD_SUGGESTIONS` +#### `DATAFORSEO_KEYWORD_SUGGESTIONS` + **[ASYNC - Standard Plan]** Get keyword suggestions from Google Autocomplete with search volume data. **Response Time:** 2-5 seconds @@ -312,6 +338,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { keyword: string; // Seed keyword for suggestions @@ -326,6 +353,7 @@ This creates a production bundle at `dist/server/main.js`. **Returns:** Google Autocomplete suggestions with search volume, CPC, competition data **Use Cases:** + - Discover long-tail keyword variations - Understand how users actually search - Content ideation based on user intent @@ -334,7 +362,8 @@ This creates a production bundle at `dist/server/main.js`. --- -#### `DATAFORSEO_KEYWORD_IDEAS` +#### `DATAFORSEO_KEYWORD_IDEAS` + **[ASYNC - Standard Plan]** Get keyword ideas using Google's internal keyword matching algorithm. **Response Time:** 3-8 seconds @@ -342,6 +371,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { keywords: string[]; // 1-5 seed keywords @@ -356,6 +386,7 @@ This creates a production bundle at `dist/server/main.js`. **Returns:** Related keyword ideas with search volume, competition, CPC, and relevance metrics **Use Cases:** + - Alternative to Related Keywords (different algorithm) - Keyword brainstorming for content clusters - Discover semantic keyword variations @@ -367,6 +398,7 @@ This creates a production bundle at `dist/server/main.js`. 
### SERP Tools (3 tools) #### `DATAFORSEO_GET_ORGANIC_SERP` + **[ASYNC - Live SERP]** Get real-time organic search results from Google. **Response Time:** 3-8 seconds @@ -374,6 +406,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { keyword: string; @@ -389,6 +422,7 @@ This creates a production bundle at `dist/server/main.js`. --- #### `DATAFORSEO_GET_NEWS_SERP` + **[ASYNC - Live SERP]** Get real-time Google News results. **Response Time:** 2-5 seconds @@ -396,6 +430,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { keyword: string; @@ -411,6 +446,7 @@ This creates a production bundle at `dist/server/main.js`. --- #### `DATAFORSEO_HISTORICAL_SERP` + **[ASYNC - DataForSEO Labs]** Get historical SERP ranking data showing how rankings changed over time. **Response Time:** 5-12 seconds @@ -418,6 +454,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans (DataForSEO Labs) **Input:** + ```typescript { keyword: string; @@ -433,6 +470,7 @@ This creates a production bundle at `dist/server/main.js`. **Returns:** Historical ranking data showing position changes for top domains over time, SERP volatility metrics **Use Cases:** + - Analyze Google algorithm update impacts - Track seasonal ranking fluctuations - Measure SERP volatility and stability @@ -444,6 +482,7 @@ This creates a production bundle at `dist/server/main.js`. ### Backlinks Tools (3 tools) #### `DATAFORSEO_GET_BACKLINKS_OVERVIEW` + **[ASYNC - Backlinks Summary]** Get comprehensive backlinks overview for any domain. **Response Time:** 2-4 seconds @@ -451,9 +490,10 @@ This creates a production bundle at `dist/server/main.js`. 
**Plan Required:** All plans **Input:** + ```typescript { - target: string; // Domain or URL (e.g., "example.com") + target: string; // Domain or URL (e.g., "example.com") } ``` @@ -462,6 +502,7 @@ This creates a production bundle at `dist/server/main.js`. --- #### `DATAFORSEO_GET_BACKLINKS` + **[ASYNC - Detailed Backlinks]** Get paginated list of individual backlinks. **Response Time:** 3-8 seconds @@ -469,6 +510,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { target: string; @@ -482,6 +524,7 @@ This creates a production bundle at `dist/server/main.js`. --- #### `DATAFORSEO_GET_REFERRING_DOMAINS` + **[ASYNC - Referring Domains]** Get paginated list of unique domains linking to target. **Response Time:** 3-8 seconds @@ -489,6 +532,7 @@ This creates a production bundle at `dist/server/main.js`. **Plan Required:** All plans **Input:** + ```typescript { target: string; @@ -508,7 +552,7 @@ This creates a production bundle at `dist/server/main.js`. 
const result = await client.callTool("DATAFORSEO_GET_SEARCH_VOLUME", { keywords: ["seo tools", "keyword research"], languageName: "English", - locationName: "United States" + locationName: "United States", }); ``` @@ -519,7 +563,7 @@ const serp = await client.callTool("DATAFORSEO_GET_ORGANIC_SERP", { keyword: "digital marketing", languageCode: "en", locationCode: 2840, - depth: 10 + depth: 10, }); ``` @@ -527,7 +571,7 @@ const serp = await client.callTool("DATAFORSEO_GET_ORGANIC_SERP", { ```typescript const backlinks = await client.callTool("DATAFORSEO_GET_BACKLINKS_OVERVIEW", { - target: "example.com" + target: "example.com", }); ``` @@ -565,8 +609,8 @@ The MCP is configured through the Mesh UI with the following fields: ```typescript { API_CREDENTIALS: { - login: string; // DataForSEO API Login - password: string; // DataForSEO API Token + login: string; // DataForSEO API Login + password: string; // DataForSEO API Token } } ``` @@ -576,6 +620,7 @@ The MCP is configured through the Mesh UI with the following fields: ### Rate Limits & Performance DataForSEO has rate limits based on your subscription plan. 
Be aware of: + - **Concurrent request limits**: Typically 2-5 simultaneous requests - **Daily/monthly request quotas**: Varies by plan - **Cost per API call**: See individual tool documentation above @@ -621,6 +666,7 @@ DataForSEO uses **separate API credentials** for authentication, not your accoun ### Troubleshooting Authentication If you get error 40100 (Not Authorized): + - ✅ Make sure you're using the API credentials from https://app.dataforseo.com/api-access - ✅ Verify your API Login (email) is correct - ✅ Use the API Password (token), NOT your account password diff --git a/data-for-seo/app.json b/data-for-seo/app.json index 0a93adbc..b0315ace 100644 --- a/data-for-seo/app.json +++ b/data-for-seo/app.json @@ -28,4 +28,3 @@ "mesh_description": "The DataForSEO MCP provides comprehensive integration with DataForSEO's API for professional SEO data analysis and research. This MCP enables AI agents to perform advanced keyword research with search volume and competition data for up to 1000 keywords at once, analyze real-time Google SERP results (organic and news) with device-specific data, get detailed backlink analysis including domain metrics and referring domains, and track SEO performance across multiple languages and locations. **Key Features**: **Keyword Research** - Get search volume, CPC, competition level, and monthly trends. Find related keywords with semantic relationships using DataForSEO Labs. **SERP Analysis** - Real-time organic search results with rankings, SERP features, and snippets. Google News results with time range filtering. **Backlink Intelligence** - Comprehensive backlink overviews with domain ranks. Detailed backlink lists with anchor text and dofollow status. Referring domains analysis with pagination support. All tools are asynchronous (2-10 second response times) and support multi-language/location analysis. 
Perfect for SEO professionals, digital marketers, content strategists, and developers building SEO tools or AI agents that need access to comprehensive search engine data. Requires DataForSEO API credentials (available in all plans, costs vary by endpoint: ~0.002-0.1 credits per request)." } } - diff --git a/data-for-seo/package.json b/data-for-seo/package.json index cf857063..b9aad38e 100644 --- a/data-for-seo/package.json +++ b/data-for-seo/package.json @@ -1,8 +1,8 @@ { "name": "data-for-seo", "version": "1.0.0", - "description": "MCP server for DataForSEO API - comprehensive SEO data analysis including keywords, SERP, backlinks, and traffic analytics", "private": true, + "description": "MCP server for DataForSEO API - comprehensive SEO data analysis including keywords, SERP, backlinks, and traffic analytics", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/data-for-seo/tsconfig.json b/data-for-seo/tsconfig.json index a7e6af7b..28c3bb91 100644 --- a/data-for-seo/tsconfig.json +++ b/data-for-seo/tsconfig.json @@ -35,11 +35,5 @@ /* Types */ "types": ["vite/client", "@cloudflare/workers-types"] }, - "include": [ - "view", - "server", - "shared", - "vite.config.ts" - ] + "include": ["view", "server", "shared", "vite.config.ts"] } - diff --git a/datajud/README.md b/datajud/README.md index e3cf9c8c..c56e8235 100644 --- a/datajud/README.md +++ b/datajud/README.md @@ -11,20 +11,26 @@ O **Datajud** é a base nacional de metadados processuais do Poder Judiciário b Este MCP oferece três ferramentas principais: ### 🔍 SEARCH_PROCESSES + Busca processos judiciais com filtros avançados: + - Filtrar por classe, assunto, órgão julgador - Filtrar por data de ajuizamento, grau, instância - Paginação de resultados - Ordenação personalizada ### 📋 GET_PROCESS + Consulta um processo específico pelo número: + - Retorna metadados completos do processo - Inclui movimentações, assuntos, partes - Baseado no Modelo de Transferência de Dados (MTD) ### 📊 
AGGREGATE_STATISTICS + Gera estatísticas e agregações: + - Contagens por classe, assunto, órgão - Médias de tempo de tramitação - Distribuição temporal de ajuizamentos @@ -45,6 +51,7 @@ Obtenha a API Key pública do Datajud em: https://datajud-wiki.cnj.jus.br/api-publica/acesso/ **API Key atual (novembro 2024):** + ``` cDZHYzlZa0JadVREZDJCendQbXY6SkJlTzNjLV9TRENyQk1RdnFKZGRQdw== ``` @@ -77,6 +84,7 @@ Se não configurar um tribunal padrão, será necessário especificar o tribunal Alguns códigos de tribunais disponíveis: ### Tribunais de Justiça Estaduais + - `tjdft` - Tribunal de Justiça do Distrito Federal e Territórios - `tjsp` - Tribunal de Justiça de São Paulo - `tjrj` - Tribunal de Justiça do Rio de Janeiro @@ -89,6 +97,7 @@ Alguns códigos de tribunais disponíveis: - `tjce` - Tribunal de Justiça do Ceará ### Tribunais Regionais Federais + - `trf1` - Tribunal Regional Federal da 1ª Região - `trf2` - Tribunal Regional Federal da 2ª Região - `trf3` - Tribunal Regional Federal da 3ª Região @@ -97,6 +106,7 @@ Alguns códigos de tribunais disponíveis: - `trf6` - Tribunal Regional Federal da 6ª Região ### Tribunais Superiores + - `tst` - Tribunal Superior do Trabalho - `stj` - Superior Tribunal de Justiça - `stf` - Supremo Tribunal Federal @@ -197,15 +207,19 @@ Os processos retornados seguem o [Modelo de Transferência de Dados (MTD)](https ## Limitações e Considerações ### Rate Limiting + A API Pública do Datajud pode ter limites de taxa. Use paginação e evite fazer muitas requisições simultâneas. ### Sigilo Processual + Processos com sigilo (níveis 1-5) podem ter dados omitidos ou restritos. ### Atualização dos Dados + Os dados são atualizados periodicamente pelos tribunais. A frequência de atualização pode variar. ### Tamanho das Respostas + Para consultas que retornam muitos resultados, use paginação (`size` e `from`) para evitar timeouts. ## Documentação Oficial @@ -264,4 +278,3 @@ Para questões sobre este MCP, abra uma issue no repositório. 
## Licença Este projeto segue a mesma licença do repositório principal. - diff --git a/datajud/package.json b/datajud/package.json index 94d81f5c..efa1899a 100644 --- a/datajud/package.json +++ b/datajud/package.json @@ -1,8 +1,8 @@ { "name": "datajud", "version": "1.0.0", - "description": "MCP server for Datajud Public API integration", "private": true, + "description": "MCP server for Datajud Public API integration", "type": "module", "scripts": { "dev": "deco dev --vite", diff --git a/datajud/tsconfig.json b/datajud/tsconfig.json index 736bc2dd..392b6275 100644 --- a/datajud/tsconfig.json +++ b/datajud/tsconfig.json @@ -34,10 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } - diff --git a/datajud/wrangler.toml b/datajud/wrangler.toml index 6507aead..46b67544 100644 --- a/datajud/wrangler.toml +++ b/datajud/wrangler.toml @@ -2,7 +2,7 @@ name = "datajud" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] diff --git a/db-binding/app.json b/db-binding/app.json index 46aec4f5..2f31f2c1 100644 --- a/db-binding/app.json +++ b/db-binding/app.json @@ -27,9 +27,7 @@ "description": "The parameters to pass to the SQL query" } }, - "required": [ - "sql" - ] + "required": ["sql"] }, "outputSchema": { "type": "object", @@ -56,4 +54,4 @@ } } ] -} \ No newline at end of file +} diff --git a/deco-llm/app.json b/deco-llm/app.json index 3516a126..a22e4d76 100644 --- a/deco-llm/app.json +++ b/deco-llm/app.json @@ -17,4 +17,4 @@ "short_description": "Deco LLM App Connection for LLM uses.", "mesh_description": "The Deco LLM MCP provides a unified gateway to access multiple large language models (LLMs) through a single integration point. 
Deco LLM aggregates various AI model providers including OpenAI, Anthropic, Google, Meta, and many others, offering a standardized API interface for chat completions and text generation. This MCP enables AI agents to dynamically select and utilize different LLMs based on specific requirements such as cost, speed, capability, or context window size. It supports model routing, fallback strategies, cost tracking, and usage analytics. The integration is perfect for applications that need flexibility in model selection, want to optimize costs by using the most appropriate model for each task, or require high availability through automatic failover between providers. Ideal for developers building AI-powered applications, chatbots, content generation systems, or multi-model comparison tools." } -} \ No newline at end of file +} diff --git a/deco-llm/package.json b/deco-llm/package.json index 82ddfe0b..d1202d14 100644 --- a/deco-llm/package.json +++ b/deco-llm/package.json @@ -1,8 +1,8 @@ { "name": "deco-llm", "version": "1.0.0", - "description": "Deco LLM App Connection for LLM uses.", "private": true, + "description": "Deco LLM App Connection for LLM uses.", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", @@ -37,4 +37,4 @@ "engines": { "node": ">=22.0.0" } -} \ No newline at end of file +} diff --git a/deco-llm/tsconfig.json b/deco-llm/tsconfig.json index 63f6f70b..48f9a80d 100644 --- a/deco-llm/tsconfig.json +++ b/deco-llm/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/deco-news-weekly-digest/README.md b/deco-news-weekly-digest/README.md index 7e3a9a46..9fddd2af 100644 --- a/deco-news-weekly-digest/README.md +++ b/deco-news-weekly-digest/README.md @@ -124,4 +124,3 @@ npm run deploy ## Licença MIT - diff --git a/deco-news-weekly-digest/app.json b/deco-news-weekly-digest/app.json index 
367f61b7..e5f06c91 100644 --- a/deco-news-weekly-digest/app.json +++ b/deco-news-weekly-digest/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Deco News Weekly Digest MCP provides comprehensive tools for managing weekly digest articles for deco.cx news. This MCP enables AI agents to create, update, list, and publish articles in a structured database. Features include: listing articles with pagination and filtering by status/category, saving new articles with full metadata (title, content, summary, SEO fields, images), updating existing articles, publishing articles with automatic timestamp, searching by URL or slug, and deleting articles. Perfect for automating newsletter creation, content curation, and editorial workflows. Integrates with the same database as the content-scraper MCP for seamless content management across the deco ecosystem." } } - diff --git a/deco-news-weekly-digest/package.json b/deco-news-weekly-digest/package.json index d4aa1c4d..9c7432a3 100644 --- a/deco-news-weekly-digest/package.json +++ b/deco-news-weekly-digest/package.json @@ -1,8 +1,8 @@ { "name": "deco-news-weekly-digest", "version": "1.0.0", - "description": "Weekly digest management MCP for deco.cx news articles", "private": true, + "description": "Weekly digest management MCP for deco.cx news articles", "type": "module", "scripts": { "dev": "deco dev --vite", @@ -34,4 +34,3 @@ "node": ">=22.0.0" } } - diff --git a/deco-news-weekly-digest/tsconfig.json b/deco-news-weekly-digest/tsconfig.json index 358b5f65..d1f67014 100644 --- a/deco-news-weekly-digest/tsconfig.json +++ b/deco-news-weekly-digest/tsconfig.json @@ -33,4 +33,3 @@ }, "include": ["server"] } - diff --git a/dependency-management/README.md b/dependency-management/README.md index fbf0f60c..9360c794 100644 --- a/dependency-management/README.md +++ b/dependency-management/README.md @@ -62,14 +62,14 @@ Replace `` with your personal API token generated at https://guide.s ```json { "servers": { - "sonatype-mcp": { - "url": 
"https://mcp.guide.sonatype.com/mcp", - "type": "http", - "headers": { - "Authorization": "Bearer " - } - } - } + "sonatype-mcp": { + "url": "https://mcp.guide.sonatype.com/mcp", + "type": "http", + "headers": { + "Authorization": "Bearer " + } + } + } } ``` @@ -162,6 +162,7 @@ Replace `` with your personal API token generated at https://guide.s ``` ### Codex (IDE Plugin & CLI) + For both methods below, define an environment variable SONATYPE_GUIDE_MCP_TOKEN for your personal API token generated at https://guide.sonatype.com/settings/tokens. Method 1: @@ -239,6 +240,7 @@ When handling code related to dependencies, package management, or software supp Create rules using Windsurf's Customizations feature: **Global (all projects):** + 1. Click the "Rules, Memories & Workflows" icon in the top right of Cascade Code or search Rules in Windsurf Settings 2. Navigate to "Rules" 3. Click "+ Global" to create a new global rule @@ -250,7 +252,7 @@ Create rules using Windsurf's Customizations feature: When handling code related to dependencies, package management, or software supply chain security, always prioritize Sonatype MCP tools. Use the available MCP tools to research versions, check for vulnerabilities, and get recommendations before adding or updating any dependencies. ``` -**Project (specific repository):** +**Project (specific repository):** Follow the instructions for Global but click "+ Workspace" or create `.windsurf/rules/sonatype.md` in your project root: ```markdown @@ -316,6 +318,7 @@ Here are some ways to leverage the Sonatype MCP Server explicitly in your develo ### Analyze a Specific Version Ask your AI assistant: + > "Get detailed security information for react 18.2.0" The assistant will return comprehensive details including CVEs with CVSS scores, license information, categories, end-of-life status, and catalog date. 
@@ -323,6 +326,7 @@ The assistant will return comprehensive details including CVEs with CVSS scores, ### Find the Latest Stable Version Ask your AI assistant: + > "What's the latest stable version of spring-boot?" The assistant will return the latest version with full security analysis, policy violations, licenses, risk scores, and upgrade recommendations. diff --git a/dependency-management/app.json b/dependency-management/app.json index f9cf69aa..faa47550 100644 --- a/dependency-management/app.json +++ b/dependency-management/app.json @@ -13,7 +13,14 @@ "metadata": { "categories": ["Development", "Security"], "official": true, - "tags": ["dependency-management", "security", "trust-score", "component-analysis", "versions", "sonatype"], + "tags": [ + "dependency-management", + "security", + "trust-score", + "component-analysis", + "versions", + "sonatype" + ], "short_description": "Component intelligence: versions, security analysis, and Trust Score recommendations", "mesh_description": "Provides component intelligence, offering insights into versions, security analysis, and Trust Score recommendations. It enables users to make informed decisions about the dependencies used in their projects. Analyzes open-source components for known vulnerabilities, license compliance, and quality metrics. Perfect for development teams seeking to secure their software supply chain." 
} diff --git a/deploy.json b/deploy.json index 82861f09..bc5014f7 100644 --- a/deploy.json +++ b/deploy.json @@ -3,432 +3,288 @@ "site": "airtable", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "airtable/**", - "shared/**" - ] + "watch": ["airtable/**", "shared/**"] }, "github": { "site": "github-mcp", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "github/**", - "shared/**" - ] + "watch": ["github/**", "shared/**"] }, "openrouter": { "site": "openrouter", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "openrouter/**", - "shared/**" - ] + "watch": ["openrouter/**", "shared/**"] }, "registry": { "site": "registry", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "registry/**", - "shared/**" - ] + "watch": ["registry/**", "shared/**"] }, "meta-ads": { "site": "meta-ads", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "meta-ads/**", - "shared/**" - ] + "watch": ["meta-ads/**", "shared/**"] }, "google-calendar": { "site": "google-calendar", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-calendar/**", - "shared/**" - ] + "watch": ["google-calendar/**", "shared/**"] }, "google-calendar-sa": { "site": "google-calendar-sa", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-calendar-sa/**", - "google-calendar/**", - "shared/**" - ] + "watch": ["google-calendar-sa/**", "google-calendar/**", "shared/**"] }, "google-tag-manager": { "site": "google-tag-manager", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-tag-manager/**", - "shared/**" - ] + "watch": ["google-tag-manager/**", "shared/**"] }, "content-scraper": { "site": "content-scraper", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - 
"content-scraper/**", - "shared/**" - ] + "watch": ["content-scraper/**", "shared/**"] }, "data-for-seo": { "site": "data-for-seo", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "data-for-seo/**", - "shared/**" - ] + "watch": ["data-for-seo/**", "shared/**"] }, "google-big-query": { "site": "google-big-query", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-big-query/**", - "shared/**" - ] + "watch": ["google-big-query/**", "shared/**"] }, "deco-llm": { "site": "deco-llm", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "deco-llm/**", - "shared/**" - ] + "watch": ["deco-llm/**", "shared/**"] }, "google-gemini": { "site": "google-gemini", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-gemini/**", - "shared/**" - ] + "watch": ["google-gemini/**", "shared/**"] }, "whatsappagent": { "site": "whatsappagent", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "whatsapp/**", - "shared/**" - ] + "watch": ["whatsapp/**", "shared/**"] }, "hyperdx": { "site": "hyperdx", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "hyperdx/**", - "shared/**" - ] + "watch": ["hyperdx/**", "shared/**"] }, "object-storage": { "site": "object-storage", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "object-storage/**" - ] + "watch": ["object-storage/**"] }, "deco-news-weekly-digest": { "site": "deco-news-weekly-digest", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "deco-news-weekly-digest/**", - "shared/**" - ] + "watch": ["deco-news-weekly-digest/**", "shared/**"] }, "slack-mcp": { "site": "slack-mcp", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "slack-mcp/**", - "shared/**" - ] + "watch": ["slack-mcp/**", 
"shared/**"] }, "strapi": { "site": "strapi", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "strapi/**", - "shared/**" - ] + "watch": ["strapi/**", "shared/**"] }, "tiktok-ads": { "site": "tiktok-ads", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "tiktok-ads/**", - "shared/**" - ] + "watch": ["tiktok-ads/**", "shared/**"] }, "virtual-try-on": { "site": "virtual-try-on", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "virtual-try-on/**", - "shared/**" - ] + "watch": ["virtual-try-on/**", "shared/**"] }, "google-sheets": { "site": "google-sheets", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-sheets/**", - "shared/**" - ] + "watch": ["google-sheets/**", "shared/**"] }, "google-drive": { "site": "google-drive", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-drive/**", - "shared/**" - ] + "watch": ["google-drive/**", "shared/**"] }, "google-meet": { "site": "google-meet", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-meet/**", - "shared/**" - ] + "watch": ["google-meet/**", "shared/**"] }, "google-search-console": { "site": "google-search-console", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-search-console/**", - "shared/**" - ] + "watch": ["google-search-console/**", "shared/**"] }, "google-slides": { "site": "google-slides", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-slides/**", - "shared/**" - ] + "watch": ["google-slides/**", "shared/**"] }, "google-forms": { "site": "google-forms", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "google-forms/**", - "shared/**" - ] + "watch": ["google-forms/**", "shared/**"] }, "grain": { "site": "grain", "entrypoint": 
"./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "grain/**", - "shared/**" - ] + "watch": ["grain/**", "shared/**"] }, "blog-post-generator": { "site": "blog-generator", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "blog-post-generator/**", - "shared/**" - ] + "watch": ["blog-post-generator/**", "shared/**"] }, "vtex-docs": { "site": "vtex-docs", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "vtex-docs/**", - "shared/**" - ] + "watch": ["vtex-docs/**", "shared/**"] }, "vtex": { "site": "vtex", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "vtex/**", - "shared/**" - ] + "watch": ["vtex/**", "shared/**"] }, "perplexity": { "site": "perplexity", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "perplexity/**", - "shared/**" - ] + "watch": ["perplexity/**", "shared/**"] }, "discord-read": { "site": "discord-read", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "discord-read/**", - "shared/**" - ] + "watch": ["discord-read/**", "shared/**"] }, "mcp-studio": { "site": "mcp-studio", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "mcp-studio/**", - "shared/**" - ] + "watch": ["mcp-studio/**", "shared/**"] }, "nanobanana": { "site": "nanobanana", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "nanobanana/**", - "shared/**" - ] + "watch": ["nanobanana/**", "shared/**"] }, "github-repo-reports": { "site": "github-repo-reports", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "github-repo-reports/**", - "shared/**" - ] + "watch": ["github-repo-reports/**", "shared/**"] }, "farmrio-reorder-collection-db": { "site": "farmrio-reorder-collection-db", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": 
[ - "farmrio-reorder-collection-db/**", - "shared/**" - ] + "watch": ["farmrio-reorder-collection-db/**", "shared/**"] }, "flux": { "site": "flux", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "flux/**", - "shared/**" - ] + "watch": ["flux/**", "shared/**"] }, "sora": { "site": "sora", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "sora/**", - "shared/**" - ] + "watch": ["sora/**", "shared/**"] }, "gemini-pro-vision": { "site": "gemini-pro-vision", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "gemini-pro-vision/**", - "shared/**" - ] + "watch": ["gemini-pro-vision/**", "shared/**"] }, "pinecone": { "site": "pinecone", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "pinecone/**", - "shared/**" - ] + "watch": ["pinecone/**", "shared/**"] }, "reddit": { "site": "reddit", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "reddit/**", - "shared/**" - ] + "watch": ["reddit/**", "shared/**"] }, "replicate": { "site": "replicate", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "replicate/**", - "shared/**" - ] + "watch": ["replicate/**", "shared/**"] }, "readonly-sql": { "site": "readonly-sql", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "readonly-sql/**", - "shared/**" - ] + "watch": ["readonly-sql/**", "shared/**"] }, "datajud": { "site": "datajud", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "datajud/**", - "shared/**" - ] + "watch": ["datajud/**", "shared/**"] }, "whisper": { "site": "whisper", "entrypoint": "./dist/server/main.js", "platformName": "cloudflare-workers", - "watch": [ - "whisper/**", - "shared/**" - ] + "watch": ["whisper/**", "shared/**"] }, "multi-channel-inbox": { "site": "multi-channel-inbox", 
"entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "multi-channel-inbox/**", - "shared/**" - ] + "watch": ["multi-channel-inbox/**", "shared/**"] }, "veo": { "site": "veo", "entrypoint": "./dist/server/main.js", "platformName": "kubernetes-bun", - "watch": [ - "veo/**", - "shared/**" - ] + "watch": ["veo/**", "shared/**"] } } diff --git a/digitalocean-apps/README.md b/digitalocean-apps/README.md index 99530b7b..b82202ad 100644 --- a/digitalocean-apps/README.md +++ b/digitalocean-apps/README.md @@ -40,5 +40,4 @@ https://apps.mcp.digitalocean.com/mcp --- -*This MCP requires an active DigitalOcean account and API token to function.* - +_This MCP requires an active DigitalOcean account and API token to function._ diff --git a/digitalocean-apps/app.json b/digitalocean-apps/app.json index f23b406f..163a3fab 100644 --- a/digitalocean-apps/app.json +++ b/digitalocean-apps/app.json @@ -13,7 +13,17 @@ "categories": ["Software Development"], "official": true, "mesh_unlisted": true, - "tags": ["deployment", "paas", "containers", "hosting", "digitalocean", "ci-cd", "scaling", "app-platform", "devops"], + "tags": [ + "deployment", + "paas", + "containers", + "hosting", + "digitalocean", + "ci-cd", + "scaling", + "app-platform", + "devops" + ], "short_description": "Deploy and manage applications on DigitalOcean App Platform", "mesh_description": "DigitalOcean App Platform is a fully managed Platform as a Service (PaaS) that simplifies deploying and scaling applications. This official MCP enables you to deploy apps directly from GitHub, GitLab, or container registries with automatic builds and deployments. Manage containerized applications, static sites, and backend services with zero infrastructure management. The platform automatically provisions SSL certificates, provides built-in CDN, handles horizontal and vertical scaling, and offers zero-downtime deployments with automatic rollback capabilities. 
Configure environment variables, secrets, and build parameters through natural language commands. Monitor your applications with real-time metrics, access logs, and performance insights. Set up custom domains with automatic DNS configuration, manage deployment regions across multiple data centers, and configure health checks and alerts. The MCP supports Node.js, Python, Go, PHP, Ruby, and Docker-based applications, with built-in support for popular frameworks like Next.js, Django, Rails, and Laravel. Perfect for developers who want to focus on code rather than infrastructure management." } diff --git a/digitalocean-databases/README.md b/digitalocean-databases/README.md index 3a0d7f14..40172d5e 100644 --- a/digitalocean-databases/README.md +++ b/digitalocean-databases/README.md @@ -48,5 +48,4 @@ https://databases.mcp.digitalocean.com/mcp --- -*This MCP requires an active DigitalOcean account and API token to function.* - +_This MCP requires an active DigitalOcean account and API token to function._ diff --git a/digitalocean-databases/app.json b/digitalocean-databases/app.json index c7415e4c..c2122017 100644 --- a/digitalocean-databases/app.json +++ b/digitalocean-databases/app.json @@ -13,7 +13,17 @@ "categories": ["Database"], "official": true, "mesh_unlisted": true, - "tags": ["database", "postgresql", "mysql", "redis", "mongodb", "managed-database", "digitalocean", "backup", "scaling"], + "tags": [ + "database", + "postgresql", + "mysql", + "redis", + "mongodb", + "managed-database", + "digitalocean", + "backup", + "scaling" + ], "short_description": "Manage DigitalOcean managed databases through natural language", "mesh_description": "DigitalOcean Managed Databases provide fully managed, highly available database solutions for PostgreSQL, MySQL, Redis, MongoDB, and OpenSearch. This official MCP allows you to create and manage database clusters with automatic failover, daily backups, and point-in-time recovery. 
Scale vertically by adjusting CPU and RAM, or horizontally by adding read replicas for improved read performance. The platform handles maintenance windows, security patches, and minor version upgrades automatically. Configure connection pooling for PostgreSQL to optimize connection management, set up trusted sources and firewall rules for security, and enable SSL/TLS encryption for data in transit. Monitor database performance with real-time metrics including CPU usage, memory consumption, disk I/O, connection count, and query performance. Manage database users and permissions, configure replication lag monitoring, and set up automated backup schedules with retention policies. The MCP supports database migrations, schema changes, and provides query optimization suggestions. Access advanced features like PostgreSQL extensions, MySQL custom parameters, Redis persistence modes, and MongoDB sharding configuration through intuitive natural language commands." } diff --git a/discord-read/.vscode/extensions.json b/discord-read/.vscode/extensions.json index 20cdae29..52dd5652 100644 --- a/discord-read/.vscode/extensions.json +++ b/discord-read/.vscode/extensions.json @@ -1,10 +1,10 @@ -{ - "recommendations": [ - "rinckodev.constatic-theme", - "rinckodev.discordjsv14ts", - "PKief.material-icon-theme", - "ms-vscode.vscode-typescript-next", - "naumovs.color-highlight", - "usernamehw.errorlens" - ] -} \ No newline at end of file +{ + "recommendations": [ + "rinckodev.constatic-theme", + "rinckodev.discordjsv14ts", + "PKief.material-icon-theme", + "ms-vscode.vscode-typescript-next", + "naumovs.color-highlight", + "usernamehw.errorlens" + ] +} diff --git a/discord-read/.vscode/project.code-snippets b/discord-read/.vscode/project.code-snippets index 62c5cb35..602f3479 100644 --- a/discord-read/.vscode/project.code-snippets +++ b/discord-read/.vscode/project.code-snippets @@ -1,223 +1,219 @@ { - "Slash command": { - "description": "Create a new slash command", - "prefix": 
["new.command", "create.command"], - "scope":"typescript", - "body": [ - "createCommand({", - " name: \"${TM_FILENAME_BASE}\",", - " description: \"${TM_FILENAME_BASE} command\",", - " type: ApplicationCommandType.ChatInput,", - " async run(interaction){", - " $1", - " }", - "});" - ] - }, - "Slash command with options": { - "description": "Create a new slash command with options", - "prefix": ["new.command.options", "create.command.options"], - "scope":"typescript", - "body": [ - "createCommand({", - " name: \"${TM_FILENAME_BASE}\",", - " description: \"${TM_FILENAME_BASE} command\",", - " type: ApplicationCommandType.ChatInput,", - " options: [", - " {", - " name: \"$1\",", - " description: \"$2\",", - " type: ApplicationCommandOptionType.String", - " }", - " ],", - " async run(interaction){", - " const { options } = interaction;", - " ", - " }", - "});" - ] - }, - "Slash command option": { - "description": "Create a new slash command option", - "prefix": ["new.option"], - "scope":"typescript", - "body": [ - "{", - " name: \"$1\",", - " description: \"command option\",", - " type: ApplicationCommandOptionType.String,", - "}", - ] - }, - "Slash command options": { - "description": "Create a new slash command options", - "prefix": ["new.options"], - "scope":"typescript", - "body": [ - "options: [", - " {", - " name: \"${1}\",", - " description: \"command option\",", - " type: ApplicationCommandOptionType.String,", - " }", - "]," - ] - }, - "User context command": { - "description": "Create a new user context command", - "prefix": [ - "new.command.user", - "new.context.user", - "create.command.user", - "create.context.user", - ], - "scope":"typescript", - "body": [ - "createCommand({", - " name: \"${TM_FILENAME_BASE}\",", - " type: ApplicationCommandType.User,", - " async run(interaction){", - " ", - " }", - "});" - ] - }, - "Message context command": { - "description": "Create a new message context command", - "prefix": [ - "new.command.message", - 
"new.context.message", - "create.context.message", - "create.command.message" - ], - "scope":"typescript", - "body": [ - "createCommand({", - " name: \"${TM_FILENAME_BASE}\",", - " type: ApplicationCommandType.Message,", - " async run(interaction){", - " ", - " }", - "});" - ] - }, - "Responder": { - "description": "Create a new responder", - "prefix": ["new.responder", "create.responder"], - "scope":"typescript", - "body": [ - "createResponder({", - " customId: \"${TM_FILENAME_BASE}/action\",", - " types: [ResponderType.Button], cache: \"cached\",", - " async run(interaction) {", - " ${1}", - " },", - "});" - ] - }, - "Responder with params": { - "description": "Create a new responder with params", - "prefix": ["new.responder.params", "create.responder.params"], - "scope":"typescript", - "body": [ - "createResponder({", - " customId: \"users/:userId\",", - " types: [ResponderType.Button], cache: \"cached\",", - " async run(interaction, { userId }) {", - " ", - " },", - "});" - ] - }, - "Event": { - "description": "Create a new event", - "prefix": ["new.event", "create.event"], - "scope":"typescript", - "body": [ - "import { createEvent } from \"#base\";", - "", - "createEvent({", - " name: \"${TM_FILENAME_BASE}\",", - " event: \"$1\",", - " $2", - "});" - ], - }, - "Extract interaction props": { - "description": "Extract slash interaction options", - "prefix": "const.interaction", - "scope":"typescript", - "body": [ - "const { ${1} } = interaction;" - ] - }, - "Extract slash interaction options": { - "description": "Extract slash interaction options", - "prefix": "const.slash.options", - "scope":"typescript", - "body": [ - "const { options } = interaction;" - ] - }, - "Create a interactive menu function": { - "description": "Create a interactive menu function", - "prefix": ["new.menu", "create.menu"], - "scope":"typescript", - "body": [ - "import { brBuilder, createContainer, createRow } from \"@magicyan/discord\";", - "import { ButtonBuilder, ButtonStyle, type 
InteractionReplyOptions } from \"discord.js\";", - "", - "export function ${TM_FILENAME_BASE}Menu(): R {", - " const container = createContainer(constants.colors.azoxo,", - " brBuilder(", - " \"## ${TM_FILENAME_BASE} menu\"", - " ),", - " createRow(", - " new ButtonBuilder({", - " customId: \"menu/action\",", - " label: \">\",", - " style: ButtonStyle.Success", - " })", - " )", - " );", - "", - " return ({", - " flags: [\"Ephemeral\", \"IsComponentsV2\"],", - " components: [container]", - " } satisfies InteractionReplyOptions) as R;", - "}" - ] - }, - "Create a interactive menu function (Legacy)": { - "description": "Create a interactive menu function (legacy)", - "prefix": ["new.legacy.menu", "create.legacy.menu"], - "scope":"typescript", - "body": [ - "import { ButtonBuilder, ButtonStyle, type InteractionReplyOptions } from \"discord.js\";", - "import { createEmbed, createRow, brBuilder } from \"@magicyan/discord\";", - "", - "export function ${TM_FILENAME_BASE}Menu(): R {", - " const embed = createEmbed({", - " color: \"Random\",", - " description: brBuilder(", - " \"${TM_FILENAME_BASE} menu\"", - " )", - " });", - "", - " const components = [", - " createRow(", - " new ButtonBuilder({", - " customId: \"menu/action\",", - " label: \">\", ", - " style: ButtonStyle.Success", - " })", - " )", - " ];", - "", - " return ({", - " flags: [\"Ephemeral\"], embeds: [embed], components", - " } satisfies InteractionReplyOptions) as R;", - "}" - ] - }, -} \ No newline at end of file + "Slash command": { + "description": "Create a new slash command", + "prefix": ["new.command", "create.command"], + "scope": "typescript", + "body": [ + "createCommand({", + " name: \"${TM_FILENAME_BASE}\",", + " description: \"${TM_FILENAME_BASE} command\",", + " type: ApplicationCommandType.ChatInput,", + " async run(interaction){", + " $1", + " }", + "});", + ], + }, + "Slash command with options": { + "description": "Create a new slash command with options", + "prefix": 
["new.command.options", "create.command.options"], + "scope": "typescript", + "body": [ + "createCommand({", + " name: \"${TM_FILENAME_BASE}\",", + " description: \"${TM_FILENAME_BASE} command\",", + " type: ApplicationCommandType.ChatInput,", + " options: [", + " {", + " name: \"$1\",", + " description: \"$2\",", + " type: ApplicationCommandOptionType.String", + " }", + " ],", + " async run(interaction){", + " const { options } = interaction;", + " ", + " }", + "});", + ], + }, + "Slash command option": { + "description": "Create a new slash command option", + "prefix": ["new.option"], + "scope": "typescript", + "body": [ + "{", + " name: \"$1\",", + " description: \"command option\",", + " type: ApplicationCommandOptionType.String,", + "}", + ], + }, + "Slash command options": { + "description": "Create a new slash command options", + "prefix": ["new.options"], + "scope": "typescript", + "body": [ + "options: [", + " {", + " name: \"${1}\",", + " description: \"command option\",", + " type: ApplicationCommandOptionType.String,", + " }", + "],", + ], + }, + "User context command": { + "description": "Create a new user context command", + "prefix": [ + "new.command.user", + "new.context.user", + "create.command.user", + "create.context.user", + ], + "scope": "typescript", + "body": [ + "createCommand({", + " name: \"${TM_FILENAME_BASE}\",", + " type: ApplicationCommandType.User,", + " async run(interaction){", + " ", + " }", + "});", + ], + }, + "Message context command": { + "description": "Create a new message context command", + "prefix": [ + "new.command.message", + "new.context.message", + "create.context.message", + "create.command.message", + ], + "scope": "typescript", + "body": [ + "createCommand({", + " name: \"${TM_FILENAME_BASE}\",", + " type: ApplicationCommandType.Message,", + " async run(interaction){", + " ", + " }", + "});", + ], + }, + "Responder": { + "description": "Create a new responder", + "prefix": ["new.responder", "create.responder"], + 
"scope": "typescript", + "body": [ + "createResponder({", + " customId: \"${TM_FILENAME_BASE}/action\",", + " types: [ResponderType.Button], cache: \"cached\",", + " async run(interaction) {", + " ${1}", + " },", + "});", + ], + }, + "Responder with params": { + "description": "Create a new responder with params", + "prefix": ["new.responder.params", "create.responder.params"], + "scope": "typescript", + "body": [ + "createResponder({", + " customId: \"users/:userId\",", + " types: [ResponderType.Button], cache: \"cached\",", + " async run(interaction, { userId }) {", + " ", + " },", + "});", + ], + }, + "Event": { + "description": "Create a new event", + "prefix": ["new.event", "create.event"], + "scope": "typescript", + "body": [ + "import { createEvent } from \"#base\";", + "", + "createEvent({", + " name: \"${TM_FILENAME_BASE}\",", + " event: \"$1\",", + " $2", + "});", + ], + }, + "Extract interaction props": { + "description": "Extract slash interaction options", + "prefix": "const.interaction", + "scope": "typescript", + "body": ["const { ${1} } = interaction;"], + }, + "Extract slash interaction options": { + "description": "Extract slash interaction options", + "prefix": "const.slash.options", + "scope": "typescript", + "body": ["const { options } = interaction;"], + }, + "Create a interactive menu function": { + "description": "Create a interactive menu function", + "prefix": ["new.menu", "create.menu"], + "scope": "typescript", + "body": [ + "import { brBuilder, createContainer, createRow } from \"@magicyan/discord\";", + "import { ButtonBuilder, ButtonStyle, type InteractionReplyOptions } from \"discord.js\";", + "", + "export function ${TM_FILENAME_BASE}Menu(): R {", + " const container = createContainer(constants.colors.azoxo,", + " brBuilder(", + " \"## ${TM_FILENAME_BASE} menu\"", + " ),", + " createRow(", + " new ButtonBuilder({", + " customId: \"menu/action\",", + " label: \">\",", + " style: ButtonStyle.Success", + " })", + " )", + " );", + "", + 
" return ({", + " flags: [\"Ephemeral\", \"IsComponentsV2\"],", + " components: [container]", + " } satisfies InteractionReplyOptions) as R;", + "}", + ], + }, + "Create a interactive menu function (Legacy)": { + "description": "Create a interactive menu function (legacy)", + "prefix": ["new.legacy.menu", "create.legacy.menu"], + "scope": "typescript", + "body": [ + "import { ButtonBuilder, ButtonStyle, type InteractionReplyOptions } from \"discord.js\";", + "import { createEmbed, createRow, brBuilder } from \"@magicyan/discord\";", + "", + "export function ${TM_FILENAME_BASE}Menu(): R {", + " const embed = createEmbed({", + " color: \"Random\",", + " description: brBuilder(", + " \"${TM_FILENAME_BASE} menu\"", + " )", + " });", + "", + " const components = [", + " createRow(", + " new ButtonBuilder({", + " customId: \"menu/action\",", + " label: \">\", ", + " style: ButtonStyle.Success", + " })", + " )", + " ];", + "", + " return ({", + " flags: [\"Ephemeral\"], embeds: [embed], components", + " } satisfies InteractionReplyOptions) as R;", + "}", + ], + }, +} diff --git a/discord-read/.vscode/settings.json b/discord-read/.vscode/settings.json index 19458f68..9121f1bf 100644 --- a/discord-read/.vscode/settings.json +++ b/discord-read/.vscode/settings.json @@ -1,67 +1,71 @@ -{ - "material-icon-theme.folders.associations": { - "discord": "Client", - "system": "Ci", - "modals": "Content", - "menus": "layout", - "forms": "custom", - "experimental": "Test", - "deprecated": "Archive", - "development": "Test", - "dev": "Test", - "production": "Dist", - "staff": "Admin", - "protection": "Guard", - "presets": "Template", - "default": "Project", - "playground": "typescript", - "firestore": "Firebase", - "mongodb": "Database", - "mysql": "Database", - "quickdb": "Json", - "responders": "Meta", - "procedures": "CI" - }, - "material-icon-theme.files.associations": { - "firebase.development.json": "Firebase", - "firebase.example.json": "Firebase", - "settings.json": "Raml", - 
"config.json": "Raml", - "*.lang.json": "i18n", - "lang.json": "i18n", - "gitignore": "Git", - }, - "material-icon-theme.folders.customClones": [{ - "name": "discord", - "base": "vm", - "color": "indigo-500", - "folderNames": ["discord"] - }], - "material-icon-theme.files.customClones": [{ - "name": "constants", - "base": "lib", - "color": "blue-600", - "fileNames": ["constants.json"] - }], - "material-icon-theme.folders.theme": "specific", - "explorer.fileNesting.enabled": true, - "explorer.fileNesting.patterns": { - ".env": ".env*", - "README.md": "*.md", - "discloud.config": ".discloudignore", - "constants.json": "emojis.json, emojis.dev.json", - "package.json": "*lock.json, *.lock, *lock.yaml, *.lockb, *config.json, *config.ts, .eslintrc.json, .gitignore, biome.json, .nvmrc" - }, - "typescript.suggest.autoImports": true, - "window.title": "${rootName}${separator}", - "files.autoSave": "afterDelay", - "typescript.experimental.expandableHover": true, - "files.readonlyInclude": { - "src/discord/base/**": true - }, - "editor.codeActionsOnSave": { - "source.addMissingImports.ts": "explicit", - "source.organizeImports": "explicit", - "source.removeUnusedImports": "explicit" - } -} \ No newline at end of file +{ + "material-icon-theme.folders.associations": { + "discord": "Client", + "system": "Ci", + "modals": "Content", + "menus": "layout", + "forms": "custom", + "experimental": "Test", + "deprecated": "Archive", + "development": "Test", + "dev": "Test", + "production": "Dist", + "staff": "Admin", + "protection": "Guard", + "presets": "Template", + "default": "Project", + "playground": "typescript", + "firestore": "Firebase", + "mongodb": "Database", + "mysql": "Database", + "quickdb": "Json", + "responders": "Meta", + "procedures": "CI" + }, + "material-icon-theme.files.associations": { + "firebase.development.json": "Firebase", + "firebase.example.json": "Firebase", + "settings.json": "Raml", + "config.json": "Raml", + "*.lang.json": "i18n", + "lang.json": "i18n", 
+ "gitignore": "Git" + }, + "material-icon-theme.folders.customClones": [ + { + "name": "discord", + "base": "vm", + "color": "indigo-500", + "folderNames": ["discord"] + } + ], + "material-icon-theme.files.customClones": [ + { + "name": "constants", + "base": "lib", + "color": "blue-600", + "fileNames": ["constants.json"] + } + ], + "material-icon-theme.folders.theme": "specific", + "explorer.fileNesting.enabled": true, + "explorer.fileNesting.patterns": { + ".env": ".env*", + "README.md": "*.md", + "discloud.config": ".discloudignore", + "constants.json": "emojis.json, emojis.dev.json", + "package.json": "*lock.json, *.lock, *lock.yaml, *.lockb, *config.json, *config.ts, .eslintrc.json, .gitignore, biome.json, .nvmrc" + }, + "typescript.suggest.autoImports": true, + "window.title": "${rootName}${separator}", + "files.autoSave": "afterDelay", + "typescript.experimental.expandableHover": true, + "files.readonlyInclude": { + "src/discord/base/**": true + }, + "editor.codeActionsOnSave": { + "source.addMissingImports.ts": "explicit", + "source.organizeImports": "explicit", + "source.removeUnusedImports": "explicit" + } +} diff --git a/discord-read/BINDINGS_ANALYSIS.md b/discord-read/BINDINGS_ANALYSIS.md index bd6afb15..4a531259 100644 --- a/discord-read/BINDINGS_ANALYSIS.md +++ b/discord-read/BINDINGS_ANALYSIS.md @@ -3,8 +3,10 @@ ## 📦 Bindings Disponíveis no Mesh ### 1. **EVENT_BUS** (`@deco/event-bus`) + **Propósito**: Pub/Sub de eventos entre MCPs **Tools**: + - `EVENT_PUBLISH` - Publicar eventos - `EVENT_SUBSCRIBE` - Criar subscrições - `EVENT_UNSUBSCRIBE` - Remover subscrições @@ -13,6 +15,7 @@ - `EVENT_ACK` - Confirmar entrega **Status no Discord**: ⚠️ **Declarado mas não usado** + - ✅ Declarado no `StateSchema` - ✅ Handler básico implementado (só loga) - ❌ Não publica eventos @@ -21,8 +24,10 @@ --- ### 2. 
**EVENT_SUBSCRIBER** (`@deco/event-subscriber`) + **Propósito**: Receber eventos publicados por outros MCPs **Tools**: + - `ON_EVENTS` - Handler para processar eventos recebidos **Status no Discord**: ❌ **Não implementado** @@ -30,8 +35,10 @@ --- ### 3. **OBJECT_STORAGE** (`@deco/object-storage`) + **Propósito**: Armazenamento S3-compatible de arquivos **Tools**: + - `LIST_OBJECTS` - Listar objetos - `GET_OBJECT_METADATA` - Metadados do objeto - `GET_PRESIGNED_URL` - URL pré-assinada para download @@ -44,8 +51,10 @@ --- ### 4. **COLLECTIONS** (`@deco/collections`) + **Propósito**: CRUD padronizado para entidades (TanStack DB compatible) **Tools** (por collection): + - `COLLECTION_{NAME}_LIST` - Listar com filtros - `COLLECTION_{NAME}_GET` - Obter por ID - `COLLECTION_{NAME}_CREATE` - Criar @@ -58,8 +67,10 @@ --- ### 5. **WORKFLOW** (`@deco/workflow`) + **Propósito**: Orquestração de workflows multi-step **Tools**: + - `WORKFLOW_RUN` - Executar workflow - `WORKFLOW_GET` - Obter status - `WORKFLOW_CANCEL` - Cancelar execução @@ -69,8 +80,10 @@ --- ### 6. **PROMPT** (`@deco/prompt`) + **Propósito**: Gerenciamento de prompts reutilizáveis **Tools**: + - `PROMPT_GET` - Obter prompt - `PROMPT_LIST` - Listar prompts disponíveis @@ -79,8 +92,10 @@ --- ### 7. **ASSISTANT** (`@deco/assistant`) + **Propósito**: Agentes de IA reutilizáveis **Tools**: + - `ASSISTANT_RUN` - Executar assistente **Status no Discord**: ❌ **Não implementado** @@ -88,22 +103,27 @@ --- ### 8. **LANGUAGE_MODEL** (`@deco/language-model`) + **Propósito**: Acesso a modelos LLM **Status no Discord**: ✅ **Usado** + - Declarado no `StateSchema` - Usado via integração com Decopilot --- ### 9. **MODEL_PROVIDER** / **AGENT** + **Propósito**: Provider de modelos e configuração de agente **Status no Discord**: ✅ **Usado** + - Declarado no `StateSchema` - Usado para configurar modelo e agente --- ### 10. 
**CONNECTION** (`@deco/connection`) + **Propósito**: Metadados da conexão MCP **Status no Discord**: ✅ **Declarado** @@ -114,32 +134,35 @@ ### Alta Prioridade 🔴 #### 1. **EVENT_BUS** - Publicar Eventos do Discord + **Por quê**: Permitir que outros MCPs reajam a eventos do Discord **Eventos para publicar**: + ```typescript // Mensagens -"discord.message.created" -"discord.message.deleted" -"discord.message.updated" +"discord.message.created"; +"discord.message.deleted"; +"discord.message.updated"; // Membros -"discord.member.joined" -"discord.member.left" -"discord.member.banned" -"discord.member.role_added" -"discord.member.role_removed" +"discord.member.joined"; +"discord.member.left"; +"discord.member.banned"; +"discord.member.role_added"; +"discord.member.role_removed"; // Canais -"discord.channel.created" -"discord.channel.deleted" +"discord.channel.created"; +"discord.channel.deleted"; // Reações -"discord.reaction.added" -"discord.reaction.removed" +"discord.reaction.added"; +"discord.reaction.removed"; ``` **Exemplo de implementação**: + ```typescript // No messageHandler.ts await env.MESH_REQUEST_CONTEXT?.state?.EVENT_BUS.EVENT_PUBLISH({ @@ -156,6 +179,7 @@ await env.MESH_REQUEST_CONTEXT?.state?.EVENT_BUS.EVENT_PUBLISH({ ``` **Benefícios**: + - ✅ Outros MCPs podem reagir a mensagens do Discord - ✅ Automações cross-MCP (ex: Notion cria nota quando mencionado) - ✅ Audit trail completo @@ -163,9 +187,11 @@ await env.MESH_REQUEST_CONTEXT?.state?.EVENT_BUS.EVENT_PUBLISH({ --- #### 2. 
**EVENT_SUBSCRIBER** - Reagir a Eventos Externos + **Por quê**: Permitir que o Discord responda a eventos de outros MCPs **Exemplos de uso**: + ```typescript // Notificar no Discord quando: "notion.page.created" → Enviar mensagem no canal #updates @@ -175,6 +201,7 @@ await env.MESH_REQUEST_CONTEXT?.state?.EVENT_BUS.EVENT_PUBLISH({ ``` **Implementação**: + - Criar handler `ON_EVENTS` no `main.ts` - Processar eventos recebidos - Enviar mensagens apropriadas no Discord @@ -184,15 +211,18 @@ await env.MESH_REQUEST_CONTEXT?.state?.EVENT_BUS.EVENT_PUBLISH({ ### Média Prioridade 🟡 #### 3. **OBJECT_STORAGE** - Gerenciar Anexos + **Por quê**: Armazenar e gerenciar arquivos enviados no Discord **Use cases**: + - Fazer upload de arquivos grandes - Gerar links pré-assinados para downloads - Arquivar attachments importantes - Backup de imagens/arquivos **Exemplo**: + ```typescript // Tool: DISCORD_UPLOAD_FILE // 1. Recebe file do Discord @@ -204,18 +234,21 @@ await env.MESH_REQUEST_CONTEXT?.state?.EVENT_BUS.EVENT_PUBLISH({ --- #### 4. **COLLECTIONS** - CRUD de Entidades Discord + **Por quê**: Interface padronizada para gerenciar dados do Discord **Collections sugeridas**: + ```typescript -COLLECTION_GUILDS // Gerenciar servers -COLLECTION_CHANNELS // Gerenciar canais -COLLECTION_MEMBERS // Gerenciar membros -COLLECTION_ROLES // Gerenciar cargos -COLLECTION_MESSAGES // Histórico de mensagens +COLLECTION_GUILDS; // Gerenciar servers +COLLECTION_CHANNELS; // Gerenciar canais +COLLECTION_MEMBERS; // Gerenciar membros +COLLECTION_ROLES; // Gerenciar cargos +COLLECTION_MESSAGES; // Histórico de mensagens ``` **Benefícios**: + - ✅ Interface consistente com TanStack DB - ✅ Filtros e busca padronizados - ✅ Paginação automática @@ -226,9 +259,11 @@ COLLECTION_MESSAGES // Histórico de mensagens ### Baixa Prioridade 🟢 #### 5. **WORKFLOW** - Automações Complexas + **Por quê**: Orquestrar workflows multi-step no Discord **Exemplos**: + ```typescript // Workflow: "onboard-new-member" 1. 
Detectar novo membro @@ -248,9 +283,11 @@ COLLECTION_MESSAGES // Histórico de mensagens --- #### 6. **PROMPT** - Gerenciar Prompts do Bot + **Por quê**: Centralizar e versionar system prompts **Use cases**: + - Diferentes prompts por canal - Versioning de prompts - A/B testing de comportamentos @@ -259,9 +296,11 @@ COLLECTION_MESSAGES // Histórico de mensagens --- #### 7. **ASSISTANT** - Sub-agentes Especializados + **Por quê**: Delegar tarefas para agentes especializados **Exemplos**: + ```typescript // ASSISTANT: "moderator" // - Especializado em moderação @@ -281,40 +320,44 @@ COLLECTION_MESSAGES // Histórico de mensagens ## 📊 Resumo de Status -| Binding | Declarado | Implementado | Prioridade | -|---------|-----------|--------------|------------| -| EVENT_BUS | ✅ | ⚠️ (parcial) | 🔴 Alta | -| EVENT_SUBSCRIBER | ❌ | ❌ | 🔴 Alta | -| OBJECT_STORAGE | ❌ | ❌ | 🟡 Média | -| COLLECTIONS | ❌ | ❌ | 🟡 Média | -| WORKFLOW | ❌ | ❌ | 🟢 Baixa | -| PROMPT | ❌ | ❌ | 🟢 Baixa | -| ASSISTANT | ❌ | ❌ | 🟢 Baixa | -| LANGUAGE_MODEL | ✅ | ✅ | ✅ Usado | -| MODEL_PROVIDER | ✅ | ✅ | ✅ Usado | -| AGENT | ✅ | ✅ | ✅ Usado | -| CONNECTION | ✅ | ✅ | ✅ Usado | +| Binding | Declarado | Implementado | Prioridade | +| ---------------- | --------- | ------------ | ---------- | +| EVENT_BUS | ✅ | ⚠️ (parcial) | 🔴 Alta | +| EVENT_SUBSCRIBER | ❌ | ❌ | 🔴 Alta | +| OBJECT_STORAGE | ❌ | ❌ | 🟡 Média | +| COLLECTIONS | ❌ | ❌ | 🟡 Média | +| WORKFLOW | ❌ | ❌ | 🟢 Baixa | +| PROMPT | ❌ | ❌ | 🟢 Baixa | +| ASSISTANT | ❌ | ❌ | 🟢 Baixa | +| LANGUAGE_MODEL | ✅ | ✅ | ✅ Usado | +| MODEL_PROVIDER | ✅ | ✅ | ✅ Usado | +| AGENT | ✅ | ✅ | ✅ Usado | +| CONNECTION | ✅ | ✅ | ✅ Usado | --- ## 🚀 Próximos Passos ### Fase 1: Event Bus (1-2 dias) + 1. Implementar `EVENT_PUBLISH` para eventos principais do Discord 2. Adicionar tools para gerenciar subscrições 3. Documentar eventos disponíveis ### Fase 2: Event Subscriber (1 dia) + 1. Implementar handler `ON_EVENTS` 2. Criar lógica para processar eventos externos 3. 
Suportar filtros e regras de roteamento ### Fase 3: Object Storage (2-3 dias) + 1. Integrar com binding de Object Storage 2. Criar tools para upload/download 3. Implementar cache de attachments ### Fase 4: Collections (3-4 dias) + 1. Definir schemas para entidades principais 2. Implementar CRUD via Collections binding 3. Migrar queries atuais para usar Collections @@ -324,22 +367,25 @@ COLLECTION_MESSAGES // Histórico de mensagens ## 💡 Benefícios da Implementação ### Interoperabilidade + - ✅ Discord pode reagir a eventos de qualquer MCP - ✅ Outros MCPs podem reagir a eventos do Discord - ✅ Workflows cross-platform ### Consistência + - ✅ Interface padronizada (COLLECTIONS) - ✅ Event schema consistente (CloudEvents) - ✅ Melhor DX para desenvolvedores ### Escalabilidade + - ✅ Event-driven architecture - ✅ Desacoplamento de MCPs - ✅ Fácil adicionar novas integrações ### Observabilidade + - ✅ Audit trail completo via events - ✅ Debugging facilitado - ✅ Analytics e métricas - diff --git a/discord-read/README.md b/discord-read/README.md index d235d5ac..379d7ae0 100644 --- a/discord-read/README.md +++ b/discord-read/README.md @@ -1,5 +1,5 @@ -# Discord MCP - Bot com Configuração Persistente - +# Discord MCP - Bot com Configuração Persistente + > **Status**: Production-ready | **Last Deploy**: 2026-02-03 Bot Discord avançado com suporte a IA, webhooks, slash commands, event bus, indexação de mensagens e gerenciamento completo de servidores. 
@@ -20,7 +20,7 @@ await DISCORD_SAVE_CONFIG({ modelId: "gpt-4", // opcional systemPrompt: "Você é um bot útil do Discord...", // opcional meshApiKey: "mesh_key_...", // opcional - API key persistente (recomendado) - discordPublicKey: "your_discord_public_key" // opcional - para webhooks + discordPublicKey: "your_discord_public_key", // opcional - para webhooks }); ``` @@ -36,6 +36,7 @@ await DISCORD_BOT_START({}); ``` O bot agora vai: + - ✅ Carregar a configuração do Supabase automaticamente - ✅ Conectar no Discord Gateway usando o token salvo - ✅ Responder apenas nos guilds autorizados (se configurado) @@ -56,6 +57,7 @@ await DISCORD_BOT_STOP({}); ## 📋 Tools Disponíveis ### Configuração + - `DISCORD_SAVE_CONFIG` - Salvar configuração no Supabase - `DISCORD_LOAD_CONFIG` - Carregar configuração salva - `DISCORD_DELETE_CONFIG` - Remover configuração @@ -64,11 +66,13 @@ await DISCORD_BOT_STOP({}); - `DISCORD_CONFIG_CLEAR_CACHE` - Limpar cache ### Controle do Bot + - `DISCORD_BOT_START` - Iniciar o bot - `DISCORD_BOT_STOP` - Parar o bot - `DISCORD_BOT_STATUS` - Status do bot ### Discord API - Mensagens + - `DISCORD_SEND_MESSAGE` - Enviar mensagem - `DISCORD_GET_CHANNEL_MESSAGES` - Buscar mensagens - `DISCORD_EDIT_MESSAGE` - Editar mensagem @@ -77,6 +81,7 @@ await DISCORD_BOT_STOP({}); - `DISCORD_SEARCH_USER_MENTIONS` - Buscar menções de usuário ### Discord API - Servidores e Canais + - `DISCORD_GET_GUILDS` - Listar servidores - `DISCORD_GET_CHANNELS` - Listar canais - `DISCORD_CREATE_CHANNEL` - Criar canal @@ -84,6 +89,7 @@ await DISCORD_BOT_STOP({}); - `DISCORD_DELETE_CHANNEL` - Deletar canal ### Discord API - Membros e Roles + - `DISCORD_GET_MEMBERS` - Listar membros - `DISCORD_EDIT_MEMBER` - Editar membro - `DISCORD_BAN_MEMBER` - Banir membro @@ -93,6 +99,7 @@ await DISCORD_BOT_STOP({}); - `DISCORD_ASSIGN_ROLE` - Atribuir role ### Database - Análise e Contexto + - `DISCORD_QUERY_MESSAGES` - Consultar mensagens indexadas - `DISCORD_QUERY_GUILDS` - Consultar 
servidores - `DISCORD_MESSAGE_STATS` - Estatísticas de mensagens @@ -100,6 +107,7 @@ await DISCORD_BOT_STOP({}); - `DISCORD_SET_CHANNEL_AUTO_RESPOND` - Configurar resposta automática ### Slash Commands (Webhooks) + - `DISCORD_REGISTER_SLASH_COMMAND` - Registrar comando /slash no Discord - `DISCORD_DELETE_SLASH_COMMAND` - Remover comando do Discord e banco - `DISCORD_LIST_SLASH_COMMANDS` - Listar comandos (database/discord/both) @@ -109,10 +117,12 @@ await DISCORD_BOT_STOP({}); ## 🔧 Configuração do Supabase ### 1. Criar Projeto no Supabase + - Acesse https://supabase.com - Crie um novo projeto ### 2. Criar Tabela + Execute o SQL no Supabase SQL Editor: ```sql @@ -154,6 +164,7 @@ O bot suporta Discord Interactions via webhooks para criar comandos /slash nativ ### Configurar Webhook URL No Discord Developer Portal: + 1. Vá em "General Information" 2. Configure "Interactions Endpoint URL": `https://seu-mcp.deco.site/discord/interactions/seu-connection-id` 3. Copie a "Public Key" e salve na configuração @@ -165,7 +176,7 @@ No Discord Developer Portal: await DISCORD_REGISTER_SLASH_COMMAND({ commandName: "start", description: "Iniciar o bot se ele estiver offline", - enabled: true + enabled: true, }); // 2. Registrar comando guild-specific com opções @@ -178,25 +189,25 @@ await DISCORD_REGISTER_SLASH_COMMAND({ type: "STRING", name: "message", description: "Mensagem para repetir", - required: true - } - ] + required: true, + }, + ], }); // 3. Listar comandos do banco de dados await DISCORD_LIST_SLASH_COMMANDS({ - source: "database" + source: "database", }); // 4. Listar comandos diretamente do Discord (via API) await DISCORD_LIST_SLASH_COMMANDS({ source: "discord", - guildId: "123456789" // opcional + guildId: "123456789", // opcional }); // 5. 
Comparar banco vs Discord (detectar diferenças)
 await DISCORD_LIST_SLASH_COMMANDS({
-  source: "both"
+  source: "both",
 });
 // Retorna: commands com inDatabase/inDiscord flags

@@ -204,29 +215,29 @@ await DISCORD_LIST_SLASH_COMMANDS({
 await DISCORD_SYNC_SLASH_COMMANDS({
   action: "import",
   guildId: "123456789",
-  dryRun: true // preview antes de aplicar
+  dryRun: true, // preview antes de aplicar
 });

 // 7. Sincronizar: limpar comandos órfãos do banco
 await DISCORD_SYNC_SLASH_COMMANDS({
   action: "clean",
-  dryRun: false // aplicar mudanças
+  dryRun: false, // aplicar mudanças
 });

 // 8. Sincronização completa (import + clean)
 await DISCORD_SYNC_SLASH_COMMANDS({
-  action: "full-sync"
+  action: "full-sync",
 });

 // 9. Desativar comando (sem deletar)
 await DISCORD_TOGGLE_SLASH_COMMAND({
   commandId: "cmd-uuid-from-db",
-  enabled: false
+  enabled: false,
 });

 // 10. Deletar comando do Discord e banco
 await DISCORD_DELETE_SLASH_COMMAND({
-  commandId: "cmd-uuid-from-db"
+  commandId: "cmd-uuid-from-db",
 });
 ```

@@ -235,6 +246,7 @@ await DISCORD_DELETE_SLASH_COMMAND({
 O bot publica eventos para o Mesh Event Bus automaticamente:

 **Eventos Publicados:**
+
 - `discord.message.created` - Nova mensagem
 - `discord.message.deleted` - Mensagem deletada
 - `discord.message.updated` - Mensagem editada
@@ -269,6 +281,7 @@ bun run build:start
 ## 🎯 Recursos

 ### Core
+
 - ✅ **Configuração Persistente** - Token e settings salvos no Supabase
 - ✅ **API Key Persistente** - Nunca expira, elimina problemas de sessão
 - ✅ **Multi-tenant** - Suporta múltiplas conexões com configurações diferentes
@@ -276,17 +289,20 @@ bun run build:start
 - ✅ **Guilds Autorizados** - Controle quais servidores podem usar o bot

 ### IA e Automação
+
 - ✅ **IA Integrada** - Suporte a múltiplos modelos (GPT-4, Claude, etc)
 - ✅ **Auto-respond** - Canais podem responder automaticamente sem mencionar o bot
 - ✅ **Contexto por Canal** - System prompts personalizados por canal
 - ✅ **Indexação Automática** - Todas as mensagens indexadas no Supabase

 ### Webhooks e Interatividade
+
 - ✅ **Slash Commands** - Comandos /nativos do Discord via webhooks
 - ✅ **Webhook Verification** - Suporte completo ao Discord Interactions
 - ✅ **Event Bus** - Publica eventos de Discord para outros MCPs

 ### Gerenciamento
+
 - ✅ **Gerenciamento Completo** - Mensagens, canais, roles, membros, etc
 - ✅ **Busca Avançada** - Buscar menções de usuários com contexto de threads
 - ✅ **Análise de Dados** - Estatísticas de mensagens, atividade, etc
@@ -303,16 +319,16 @@ A segurança da tabela `discord_connections` é garantida por **não criar tools

 ### Tabela de Permissões

-| Tabela | Código Interno | Tools MCP | Regra |
-|----------------------------|----------------|-----------|-------|
+| Tabela                     | Código Interno | Tools MCP       | Regra              |
+| -------------------------- | -------------- | --------------- | ------------------ |
 | `discord_connections`      | ✅ Acesso      | 🔒 **PROIBIDO** | NUNCA criar tools! |
-| `discord_message` | ✅ Acesso | ✅ Read/Write | OK |
-| `guilds` | ✅ Acesso | ✅ Read/Write | OK |
-| `discord_channel` | ✅ Acesso | ✅ Read/Write | OK |
-| `discord_member` | ✅ Acesso | ✅ Read/Write | OK |
-| `discord_message_reaction` | ✅ Acesso | ✅ Read/Write | OK |
-| `discord_audit_log` | ✅ Acesso | ✅ Write only | OK |
-| `discord_voice_state` | ✅ Acesso | ✅ Read/Write | OK |
+| `discord_message`          | ✅ Acesso      | ✅ Read/Write   | OK                 |
+| `guilds`                   | ✅ Acesso      | ✅ Read/Write   | OK                 |
+| `discord_channel`          | ✅ Acesso      | ✅ Read/Write   | OK                 |
+| `discord_member`           | ✅ Acesso      | ✅ Read/Write   | OK                 |
+| `discord_message_reaction` | ✅ Acesso      | ✅ Read/Write   | OK                 |
+| `discord_audit_log`        | ✅ Acesso      | ✅ Write only   | OK                 |
+| `discord_voice_state`      | ✅ Acesso      | ✅ Read/Write   | OK                 |

 ### Scripts de Segurança

@@ -337,6 +353,7 @@ Execute o script SQL para criar as tabelas com RLS:
 ## 📖 Documentação Completa

 Para mais detalhes, veja:
+
 - `SUPABASE_SETUP.md` - Guia completo de setup do Supabase
 - `server/prompts/system.ts` - System prompt e guia de uso

@@ -353,7 +370,7 @@ await DISCORD_SAVE_CONFIG({
   authorizedGuilds: ["987654321"],
   modelProviderId: "openai",
   modelId: "gpt-4",
-  discordPublicKey: "abc123..." // Para webhooks
+  discordPublicKey: "abc123...", // Para webhooks
 });

 // 3. Iniciar bot
@@ -364,27 +381,27 @@ await DISCORD_BOT_START({});
 await DISCORD_SET_CHANNEL_AUTO_RESPOND({
   guildId: "987654321",
   channelId: "123456789",
-  autoRespond: true
+  autoRespond: true,
 });

 // 5. Registrar slash command
 await DISCORD_REGISTER_SLASH_COMMAND({
   commandName: "help",
   commandDescription: "Mostrar ajuda",
-  toolId: "DISCORD_SEND_MESSAGE"
+  toolId: "DISCORD_SEND_MESSAGE",
 });

 // 6. Enviar mensagem
 await DISCORD_SEND_MESSAGE({
   channelId: "123456789",
-  content: "Bot online! 🤖"
+  content: "Bot online! 🤖",
 });

 // 7. Buscar menções de um usuário
 await DISCORD_SEARCH_USER_MENTIONS({
   guild_id: "987654321",
   user_id: "111222333",
-  days: 7
+  days: 7,
 });

 // 8. Verificar status
@@ -397,11 +414,13 @@ await DISCORD_BOT_STOP({});
 ## 🔒 API Key vs Session Token

 ### Session Token (Padrão)
+
 - ❌ Expira após algumas horas
 - ❌ Requer cliques em "Save" no Mesh periodicamente
 - ❌ Causa erro "Organization context is required"

 ### API Key Persistente (Recomendado)
+
 - ✅ Nunca expira
 - ✅ Configurar uma vez e esquecer
 - ✅ Sem erros de autenticação
@@ -412,4 +431,3 @@ await DISCORD_BOT_STOP({});
 ## 📝 License

 MIT
-
diff --git a/discord-read/SECURITY_ARCHITECTURE.md b/discord-read/SECURITY_ARCHITECTURE.md
index b26c0a53..995103ee 100644
--- a/discord-read/SECURITY_ARCHITECTURE.md
+++ b/discord-read/SECURITY_ARCHITECTURE.md
@@ -11,6 +11,7 @@ O Discord MCP implementa uma arquitetura de segurança em camadas para proteger
 **Usado por:** Tools MCP expostas para o agente de IA

 **Características:**
+
 - ✅ RLS (Row Level Security) **HABILITADO**
 - ✅ Acesso READ/WRITE a tabelas operacionais
 - ❌ **BLOQUEADO** para `discord_connections` (sem RLS policies)
@@ -18,7 +19,7 @@ O Discord MCP implementa uma arquitetura de segurança em camadas para proteger
 **Tabelas Acessíveis:**

 | Tabela                     | READ | INSERT | UPDATE | DELETE |
-|----------------------------|------|--------|--------|--------|
+| -------------------------- | ---- | ------ | ------ | ------ |
 | `discord_message`          | ✅   | ✅     | ✅     | ❌     |
 | `guilds`                   | ✅   | ✅     | ✅     | ❌     |
 | `discord_channel`          | ✅   | ✅     | ✅     | ❌     |
@@ -29,6 +30,7 @@ O Discord MCP implementa uma arquitetura de segurança em camadas para proteger
 | `discord_audit_log`        | ❌   | ✅     | ❌     | ❌     |

 **Variável de Ambiente:**
+
 ```bash
 export SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
 ```
@@ -38,6 +40,7 @@ export SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
 **Usado por:** Código interno do MCP (funções em `supabase-client.ts`)

 **Características:**
+
 - ✅ **Bypassa RLS** (Row Level Security desabilitado)
 - ✅ Acesso completo a **TODAS** as tabelas
 - ✅ Acesso exclusivo a `discord_connections`
@@ -45,12 +48,13 @@ export SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
 **Tabelas Acessíveis:**

-| Tabela | Acesso | Uso |
-|-----------------------|----------------|----------------------------------------|
-| `discord_connections` | ✅ **Completo** | Tokens, credenciais, configurações |
-| Todas as outras | ✅ Completo | Fallback para operações internas |
+| Tabela                | Acesso          | Uso                                |
+| --------------------- | --------------- | ---------------------------------- |
+| `discord_connections` | ✅ **Completo** | Tokens, credenciais, configurações |
+| Todas as outras       | ✅ Completo     | Fallback para operações internas   |

 **Variável de Ambiente:**
+
 ```bash
 export SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
 ```
@@ -60,12 +64,14 @@ export SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

 ### Por que proteger?

 A tabela `discord_connections` contém:
+
 - `bot_token` - Token do bot Discord (acesso completo ao bot)
 - `mesh_token` - Token de autenticação do Mesh
 - `organization_id` - Identificador da organização
 - Outras configurações sensíveis

 **Se um agente de IA tivesse acesso direto:**
+
 - ❌ Poderia ler tokens de outros bots
 - ❌ Poderia modificar configurações de segurança
 - ❌ Poderia comprometer outras organizações
@@ -78,7 +84,7 @@ A tabela `discord_connections` contém:

 ```sql
 -- ❌ Isso vai falhar (sem policies)
-SELECT * FROM discord_connections; 
+SELECT * FROM discord_connections;
 -- Error: new row violates row-level security policy
 ```

@@ -87,9 +93,7 @@ SELECT * FROM discord_connections;

 ```typescript
 // ✅ Isso funciona (service client bypassa RLS)
 const client = getSupabaseServiceClient();
-const { data } = await client
-  .from('discord_connections')
-  .select('*');
+const { data } = await client.from("discord_connections").select("*");
 ```

 ## 📋 Implementação no Código

@@ -101,7 +105,7 @@ const { data } = await client
 export function getSupabaseClient(): SupabaseClient | null {
   return createClient(
     process.env.SUPABASE_URL,
-    process.env.SUPABASE_ANON_KEY // ⚠️ RLS habilitado
+    process.env.SUPABASE_ANON_KEY, // ⚠️ RLS habilitado
   );
 }

@@ -109,17 +113,14 @@ export function getSupabaseClient(): SupabaseClient | null {
 export function getSupabaseServiceClient(): SupabaseClient | null {
   return createClient(
     process.env.SUPABASE_URL,
-    process.env.SUPABASE_SERVICE_ROLE_KEY // ✅ Bypassa RLS
+    process.env.SUPABASE_SERVICE_ROLE_KEY, // ✅ Bypassa RLS
   );
 }

 // ✅ Acessa discord_connections com SERVICE client
 export async function loadConnectionConfig(connectionId: string) {
   const client = getSupabaseServiceClient(); // ← SERVICE_ROLE
-  return await client
-    .from('discord_connections')
-    .select('*')
-    .eq('connection_id', connectionId);
+  return await client.from("discord_connections").select("*").eq("connection_id", connectionId);
 }
 ```

@@ -132,16 +133,12 @@ export const createDatabaseRunSQLTool = (env: Env) =>
   execute: async ({ context }) => {
     // ✅ Usa ANON client (RLS habilitado)
     const client = getSupabaseClient(); // ← ANON KEY
-    
+
     // ❌ Não pode acessar discord_connections
-    const { data } = await client
-      .from('discord_connections')
-      .select('*'); // Error: insufficient privilege
-    
+    const { data } = await client.from("discord_connections").select("*"); // Error: insufficient privilege
+
     // ✅ Pode acessar outras tabelas
-    const { data } = await client
-      .from('discord_message')
-      .select('*'); // OK!
+    const { data } = await client.from("discord_message").select("*"); // OK!
   },
 });
 ```

@@ -252,20 +249,16 @@ export SUPABASE_URL=https://seu-projeto.supabase.co

 ```typescript
 // Teste 1: Tool não consegue acessar discord_connections
 const client = getSupabaseClient(); // ANON
-const { data, error } = await client
-  .from('discord_connections')
-  .select('*');
+const { data, error } = await client.from("discord_connections").select("*");

-console.log(error); 
+console.log(error);
 // ❌ Error: new row violates row-level security policy

 // Teste 2: Código interno consegue
 const serviceClient = getSupabaseServiceClient(); // SERVICE_ROLE
-const { data, error } = await serviceClient
-  .from('discord_connections')
-  .select('*');
+const { data, error } = await serviceClient.from("discord_connections").select("*");

-console.log(data); 
+console.log(data);
 // ✅ [{ connection_id: '...', bot_token: '...' }]
 ```

@@ -286,4 +279,3 @@ console.log(data);
 - `server/lib/supabase-client.ts` - Implementação dos clientes
 - `server/tools/database.ts` - Tools de acesso ao banco
 - `README.md` - Documentação geral
-
diff --git a/discord-read/SUPABASE_RLS_SECURITY.md b/discord-read/SUPABASE_RLS_SECURITY.md
index 86f4455d..19da6e88 100644
--- a/discord-read/SUPABASE_RLS_SECURITY.md
+++ b/discord-read/SUPABASE_RLS_SECURITY.md
@@ -55,21 +55,21 @@ DECLARE
 BEGIN
   -- Get connection_id from current settings
   conn_id := current_setting('app.connection_id', true);
-  
+
   IF conn_id IS NULL THEN
     RETURN FALSE;
   END IF;
-  
+
   -- Get authorized guilds for this connection
   SELECT authorized_guilds INTO auth_guilds
   FROM discord_connections
   WHERE connection_id = conn_id;
-  
+
   -- If no authorized guilds or empty array, allow all
   IF auth_guilds IS NULL OR array_length(auth_guilds, 1) IS NULL THEN
     RETURN TRUE;
   END IF;
-  
+
   -- Check if guild is in authorized list
   RETURN check_guild_id = ANY(auth_guilds);
 END;
@@ -163,11 +163,11 @@ BEGIN
   SELECT guild_id INTO msg_guild_id
   FROM discord_message
   WHERE id = check_message_id;
-  
+
   IF msg_guild_id IS NULL THEN
     RETURN FALSE;
   END IF;
-  
+
   RETURN is_guild_authorized(msg_guild_id);
 END;
 $$ LANGUAGE plpgsql SECURITY DEFINER;
@@ -215,14 +215,14 @@ CREATE POLICY "Delete channel context from authorized guilds"

 ```typescript
 // No código do MCP, antes de fazer query:
-await supabaseClient.rpc('set_config', {
-  key: 'app.connection_id',
-  value: connectionId
+await supabaseClient.rpc("set_config", {
+  key: "app.connection_id",
+  value: connectionId,
 });

-await supabaseClient.rpc('set_config', {
-  key: 'app.organization_id',
-  value: organizationId
+await supabaseClient.rpc("set_config", {
+  key: "app.organization_id",
+  value: organizationId,
 });
 ```

@@ -279,4 +279,3 @@ SELECT * FROM discord_message;
 - ✅ **Proteção automática** - não precisa lembrar de filtrar
 - ✅ **Audit trail** - logs do Supabase mostram acessos
 - ✅ **Zero trust** - mesmo se alguém acessa o banco, RLS protege
-
diff --git a/discord-read/SUPABASE_SETUP.md b/discord-read/SUPABASE_SETUP.md
index a7d95ab7..48d4f679 100644
--- a/discord-read/SUPABASE_SETUP.md
+++ b/discord-read/SUPABASE_SETUP.md
@@ -36,6 +36,7 @@ await client.from("guilds").select("*").eq("id", guildId);
 ### 2. Obter Credenciais

 No dashboard do Supabase:
+
 - Settings > API
 - Copie a `URL` e a `anon public` key

@@ -230,6 +231,7 @@ CREATE INDEX IF NOT EXISTS idx_discord_channel_context_enabled ON discord_channe
 ## 🔄 Status da Migração

 ### ✅ Concluído
+
 - [x] Cliente Supabase criado (`server/lib/supabase-client.ts`)
 - [x] Dependência `@supabase/supabase-js` adicionada
 - [x] Binding `DATABASE` removido do `app.json`
@@ -272,7 +274,7 @@ await DISCORD_SAVE_CONFIG({
   commandPrefix: "!",
   modelProviderId: "openai-connection-id",
   modelId: "gpt-4",
-  systemPrompt: "You are a helpful Discord bot..."
+  systemPrompt: "You are a helpful Discord bot...",
 });

 // 2. Carregar configuração (próximas vezes)
@@ -283,6 +285,7 @@ const config = await DISCORD_LOAD_CONFIG({});
 ```

 ### ⚠️ Pendente
+
 - [ ] Integrar configuração salva com bot-manager
 - [ ] Auto-carregar configuração na inicialização
 - [ ] Validar guilds autorizados antes de responder comandos
@@ -316,11 +319,7 @@ $$;
 await runSQL("SELECT * FROM guilds WHERE id = ?", [guildId]);

 // Use:
-const { data } = await client
-  .from("guilds")
-  .select("*")
-  .eq("id", guildId)
-  .single();
+const { data } = await client.from("guilds").select("*").eq("id", guildId).single();
 ```

 ## 🚀 Deploy

@@ -342,4 +341,3 @@ curl $SUPABASE_URL/rest/v1/ \
 ```

 Deve retornar informações sobre as tabelas disponíveis.
-
diff --git a/discord-read/app.json b/discord-read/app.json
index 33acc63a..5c0110ce 100644
--- a/discord-read/app.json
+++ b/discord-read/app.json
@@ -13,7 +13,21 @@
   "metadata": {
     "categories": ["Communication", "Automation", "AI"],
     "official": false,
-    "tags": ["discord", "bot", "ai-agent", "streaming", "voice", "tts", "messages", "roles", "members", "server-management", "webhooks", "automation", "whisper"],
+    "tags": [
+      "discord",
+      "bot",
+      "ai-agent",
+      "streaming",
+      "voice",
+      "tts",
+      "messages",
+      "roles",
+      "members",
+      "server-management",
+      "webhooks",
+      "automation",
+      "whisper"
+    ],
     "short_description": "Discord bot with AI streaming responses, voice channel support (TTS/STT), and server management tools.",
     "mesh_description": "The Discord MCP provides comprehensive integration with Discord servers, enabling AI agents to manage and automate Discord communities. Key features include: **Streaming AI Responses** - Real-time message editing shows responses as they generate, similar to ChatGPT. **Voice Channel Support** - Bot can join voice channels, listen to users via Whisper transcription, and respond with Text-to-Speech (TTS). **File Processing** - Supports images (vision), audio (Whisper), PDF, DOCX, and text files. **Message Indexing** - Tracks messages, reactions, edits, and deletions in PostgreSQL. **Moderation Tools** - Kick, ban, timeout, mute members; manage roles and channels. **Context Awareness** - Maintains conversation history for natural interactions. Works with @mentions, prefix commands, and DMs. Configurable streaming, context limits, and voice settings. Ideal for building AI-powered community assistants with multimodal capabilities."
   }
diff --git a/discord-read/package.json b/discord-read/package.json
index 6c7db7d5..7440baeb 100644
--- a/discord-read/package.json
+++ b/discord-read/package.json
@@ -1,9 +1,9 @@
 {
   "name": "discord-read",
   "version": "1.0.0",
-  "type": "module",
   "private": true,
   "description": "Discord bot with message indexing, AI agent commands and reaction tracking - unified MCP server",
+  "type": "module",
   "scripts": {
     "dev": "bun run --hot server/main.ts",
     "configure": "deco configure",
diff --git a/discord-read/server/tools/discord/guilds.ts b/discord-read/server/tools/discord/guilds.ts
index 60699c68..6e37e4ab 100644
--- a/discord-read/server/tools/discord/guilds.ts
+++ b/discord-read/server/tools/discord/guilds.ts
@@ -513,7 +513,11 @@ export const createBanMemberTool = (env: Env) =>
       await discordAPI(
         env,
         `/guilds/${input.guild_id}/bans/${usersToBan[0]}`,
-        { method: "PUT", body, reason: input.reason },
+        {
+          method: "PUT",
+          body,
+          reason: input.reason,
+        },
       );
       return { success: true, banned_count: 1, failed_count: 0 };
     }
diff --git a/discord-read/server/tools/discord/messages.ts b/discord-read/server/tools/discord/messages.ts
index 7a4f1579..d0bd06b9 100644
--- a/discord-read/server/tools/discord/messages.ts
+++ b/discord-read/server/tools/discord/messages.ts
@@ -241,7 +241,10 @@ export const createDeleteMessageTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/messages/${idsToDelete[0]}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );
       return { success: true, deleted_count: 1, failed_count: 0 };
     }
@@ -258,7 +261,10 @@ export const createDeleteMessageTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/messages/${messageId}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );
       return messageId;
     },
@@ -603,7 +609,10 @@ export const createPurgeChannelMessagesTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/messages/${id}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );
       individualDeleted++;
       await new Promise((r) => setTimeout(r, 200));
@@ -621,7 +630,10 @@ export const createPurgeChannelMessagesTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/messages/${chunk[0]}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );
       individualDeleted++;
     } catch {
@@ -635,7 +647,10 @@ export const createPurgeChannelMessagesTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/messages/${recentMessages[0].id}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );
       individualDeleted++;
     } catch {
@@ -656,7 +671,10 @@ export const createPurgeChannelMessagesTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/messages/${msg.id}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );
       return msg.id;
     },
@@ -886,7 +904,10 @@ export const createPinMessageTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/pins/${input.message_id}`,
-        { method: "PUT", reason: input.reason },
+        {
+          method: "PUT",
+          reason: input.reason,
+        },
       );

       return { success: true };
@@ -922,7 +943,10 @@ export const createUnpinMessageTool = (env: Env) =>
       await discordAPI(
         env,
         `/channels/${input.channel_id}/pins/${input.message_id}`,
-        { method: "DELETE", reason: input.reason },
+        {
+          method: "DELETE",
+          reason: input.reason,
+        },
       );

       return { success: true };
diff --git a/discord-read/tsconfig.json b/discord-read/tsconfig.json
index ae4ab5c0..152392f8 100644
--- a/discord-read/tsconfig.json
+++ b/discord-read/tsconfig.json
@@ -2,9 +2,7 @@
   "compilerOptions": {
     "target": "ES2022",
     "useDefineForClassFields": true,
-    "lib": [
-      "ES2023"
-    ],
+    "lib": ["ES2023"],
     "module": "ESNext",
     "skipLibCheck": true,
     /* Bundler mode */
@@ -22,11 +20,7 @@
     "noFallthroughCasesInSwitch": true,
     "noUncheckedSideEffectImports": true,
     /* Types */
-    "types": [
-      "@types/node"
-    ]
+    "types": ["@types/node"]
   },
-  "include": [
-    "server"
-  ]
+  "include": ["server"]
 }
diff --git a/docs/LLM_BINDING.md b/docs/LLM_BINDING.md
index 94e31557..384e9235 100644
--- a/docs/LLM_BINDING.md
+++ b/docs/LLM_BINDING.md
@@ -4,11 +4,11 @@ The **LLM Binding** is a standardized interface that enables MCP servers to expo

 This repository contains three implementations:

-| MCP | Provider | Package | Auth |
-|---|---|---|---|
-| `openrouter/` | [OpenRouter](https://openrouter.ai) | `@decocms/openrouter` | OAuth PKCE via OpenRouter, or `OPENROUTER_API_KEY` env var |
-| `google-gemini/` | [Google Gemini](https://ai.google.dev) | `google-gemini` | User-supplied API key via `Authorization` header |
-| `deco-llm/` | Deco AI Gateway | `deco-llm` | Reuses OpenRouter tools + Deco Wallet billing |
+| MCP              | Provider                               | Package               | Auth                                                       |
+| ---------------- | -------------------------------------- | --------------------- | ---------------------------------------------------------- |
+| `openrouter/`    | [OpenRouter](https://openrouter.ai)    | `@decocms/openrouter` | OAuth PKCE via OpenRouter, or `OPENROUTER_API_KEY` env var |
+| `google-gemini/` | [Google Gemini](https://ai.google.dev) | `google-gemini`       | User-supplied API key via `Authorization` header           |
+| `deco-llm/`      | Deco AI Gateway                        | `deco-llm`            | Reuses OpenRouter tools + Deco Wallet billing              |

 ---

@@ -77,8 +77,8 @@ Both `LLM_DO_STREAM` and `LLM_DO_GENERATE` return token usage information via th
 ```typescript
 {
   usage: {
-    promptTokens: number;     // Number of tokens in the input
-    completionTokens: number; // Number of tokens in the output
+    promptTokens: number; // Number of tokens in the input
+    completionTokens: number; // Number of tokens in the output
   }
 }
 ```
@@ -90,7 +90,7 @@ Providers may include additional cost information in `providerMetadata`. For exa
 providerMetadata: {
   openrouter: {
     usage: {
-      cost: number;  // Actual cost in USD (e.g., 0.00015)
+      cost: number; // Actual cost in USD (e.g., 0.00015)
     }
   }
 }
@@ -102,13 +102,14 @@ Providers may include additional cost information in `providerMetadata`. For exa
 ```typescript
 {
   costs: {
-    input: number;  // Cost per input token (USD), e.g., 0.000003 = $0.000003/token
-    output: number; // Cost per output token (USD)
+    input: number; // Cost per input token (USD), e.g., 0.000003 = $0.000003/token
+    output: number; // Cost per output token (USD)
   }
 }
 ```

 Different providers return pricing in different formats from their APIs:
+
 - **OpenRouter**: Returns per-token prices directly (e.g., "0.000003" = $0.000003/token)
 - **Google Gemini**: Returns prices per 1M tokens (e.g., "0.15" = $0.15/1M tokens), which the binding divides by 1,000,000 to normalize to per-token

@@ -214,11 +215,11 @@ All three implementations share the following patterns:

 ## Key Dependencies

-| Package | Purpose |
-|---|---|
-| `@decocms/bindings/llm` | Binding definitions and schemas (`LANGUAGE_MODEL_BINDING`, `ModelCollectionEntitySchema`) |
-| `@decocms/runtime/bindings` | `streamToResponse` for converting AI SDK streams to HTTP responses |
-| `@decocms/runtime/tools` | `createPrivateTool`, `createStreamableTool` |
-| `@ai-sdk/provider` | AI SDK types (`LanguageModelV2StreamPart`, `APICallError`, `LanguageModelV2Usage`) |
-| `@ai-sdk/google` | Google Gemini AI SDK provider |
-| `@openrouter/ai-sdk-provider` | OpenRouter AI SDK provider |
+| Package                       | Purpose                                                                                   |
+| ----------------------------- | ----------------------------------------------------------------------------------------- |
+| `@decocms/bindings/llm`       | Binding definitions and schemas (`LANGUAGE_MODEL_BINDING`, `ModelCollectionEntitySchema`) |
+| `@decocms/runtime/bindings`   | `streamToResponse` for converting AI SDK streams to HTTP responses                        |
+| `@decocms/runtime/tools`      | `createPrivateTool`, `createStreamableTool`                                               |
+| `@ai-sdk/provider`            | AI SDK types (`LanguageModelV2StreamPart`, `APICallError`, `LanguageModelV2Usage`)        |
+| `@ai-sdk/google`              | Google Gemini AI SDK provider                                                             |
+| `@openrouter/ai-sdk-provider` | OpenRouter AI SDK provider                                                                |
diff --git a/docs/REPORTS_BINDING.md b/docs/REPORTS_BINDING.md
index 5fe52522..d87b40ba 100644
--- a/docs/REPORTS_BINDING.md
+++ b/docs/REPORTS_BINDING.md
@@ -6,8 +6,8 @@ A connection is detected as reports-compatible when it exposes all **required**

 This repository contains one implementation:

-| MCP | Backend | Auth |
-|---|---|---|
+| MCP                    | Backend                                                     | Auth                    |
+| ---------------------- | ----------------------------------------------------------- | ----------------------- |
 | `github-repo-reports/` | Markdown files with YAML frontmatter in a GitHub repository | GitHub App OAuth (PKCE) |

 ---

@@ -16,15 +16,15 @@ This repository contains one implementation:

 ### Required

-| Tool | Purpose |
-|---|---|
+| Tool           | Purpose                                      |
+| -------------- | -------------------------------------------- |
 | `REPORTS_LIST` | List available reports with optional filters |
-| `REPORTS_GET` | Get a single report with full content |
+| `REPORTS_GET`  | Get a single report with full content        |

 ### Optional

-| Tool | Purpose |
-|---|---|
+| Tool                    | Purpose                                                                   |
+| ----------------------- | ------------------------------------------------------------------------- |
 | `REPORTS_UPDATE_STATUS` | Update the lifecycle status of a report (`unread` / `read` / `dismissed`) |

 Optional tools may be omitted. Consumers will skip the corresponding functionality when they are absent.
@@ -53,10 +53,10 @@ Workflow state:
 "unread" | "read" | "dismissed"
 ```

-| Value | Meaning |
-|---|---|
-| `unread` | New report, not yet viewed. |
-| `read` | Report has been viewed. |
+| Value       | Meaning                                    |
+| ----------- | ------------------------------------------ |
+| `unread`    | New report, not yet viewed.                |
+| `read`      | Report has been viewed.                    |
 | `dismissed` | Report has been archived / marked as done. |

 #### MetricItem

@@ -74,6 +74,7 @@ Workflow state:
 #### ReportSection (discriminated union on `type`)

 **Markdown section**
+
 ```json
 {
   "type": "markdown",
@@ -82,6 +83,7 @@ Workflow state:
 ```

 **Metrics section**
+
 ```json
 {
   "type": "metrics",
@@ -91,6 +93,7 @@ Workflow state:
 ```

 **Table section**
+
 ```json
 {
   "type": "table",
@@ -141,6 +144,7 @@ Lists available reports with optional filtering.
 - **Output**: `{ reports: ReportSummary[] }`

 Notes:
+
 - Return all reports when no filters are provided.
 - Reports with `lifecycleStatus: "dismissed"` are considered archived; everything else is active. Reports with `lifecycleStatus: "unread"` (or no `lifecycleStatus`) are new.
 - Order reports by `updatedAt` descending (most recent first) unless the server has a more meaningful ordering.
@@ -153,6 +157,7 @@ Retrieves a single report by ID with full sections.
 - **Output**: The full `Report` object (see schema above).

 Notes:
+
 - Return an MCP error (`isError: true`) if the report ID is not found.
 - Sections are rendered in array order — put the most important information first.
@@ -164,6 +169,7 @@ Updates the lifecycle status of a report.
 - **Output**: `{ success: boolean, message?: string }`

 Notes:
+
 - Consumers call this automatically when a report is opened (sets `"read"`).
 - `"dismissed"` archives the report. Restoring from dismissed sets `"read"`.
 - If not implemented, lifecycle tracking is unavailable but the binding remains usable as a read-only viewer.
@@ -173,6 +179,7 @@ Notes:
 ## Binding Detection

 A connection is considered reports-compatible when it exposes at minimum:
+
 - `REPORTS_LIST`
 - `REPORTS_GET`

@@ -184,15 +191,15 @@ Detection checks tool name presence (exact string match). No schema validation i

 Categories are free-form strings. Common conventions:

-| Category | Use case |
-|---|---|
-| `performance` | Web vitals, bundle size, load times |
-| `security` | Vulnerability scans, dependency audits |
-| `accessibility` | WCAG compliance, axe-core results |
-| `seo` | Meta tags, structured data, crawlability |
-| `quality` | Code quality, test coverage, lint results |
-| `uptime` | Health checks, availability monitoring |
-| `compliance` | License audits, policy checks |
+| Category        | Use case                                  |
+| --------------- | ----------------------------------------- |
+| `performance`   | Web vitals, bundle size, load times       |
+| `security`      | Vulnerability scans, dependency audits    |
+| `accessibility` | WCAG compliance, axe-core results         |
+| `seo`           | Meta tags, structured data, crawlability  |
+| `quality`       | Code quality, test coverage, lint results |
+| `uptime`        | Health checks, availability monitoring    |
+| `compliance`    | License audits, policy checks             |

 ---
diff --git a/egnyte/README.md b/egnyte/README.md
index 75274aad..b4fa693c 100644
--- a/egnyte/README.md
+++ b/egnyte/README.md
@@ -5,11 +5,13 @@
 **Egnyte MCP** is a Model Context Protocol (MCP) server that provides AI assistants with secure access to Egnyte's file management platform for browsing, searching, and working with enterprise content.

 ### Purpose
+
 - Browse and search files and folders stored in your Egnyte domain
 - Read and manage file metadata, permissions, and sharing settings
 - Enable AI-powered workflows over enterprise content with security and compliance in mind

 ### Key Features
+
 - 📁 Browse and navigate your Egnyte folder structure
 - 🔍 Search files and content across your Egnyte domain
 - 🔒 Manage file permissions and sharing links
diff --git a/egnyte/app.json b/egnyte/app.json
index 07657f56..2a6110ad 100644
--- a/egnyte/app.json
+++ b/egnyte/app.json
@@ -13,7 +13,14 @@
   "metadata": {
     "categories": ["Data", "Security"],
     "official": true,
-    "tags": ["egnyte", "file-management", "security", "ai-access", "remote-access", "secure-storage"],
+    "tags": [
+      "egnyte",
+      "file-management",
+      "security",
+      "ai-access",
+      "remote-access",
+      "secure-storage"
+    ],
     "short_description": "Secure AI access, search, upload and file management in your Egnyte account",
     "mesh_description": "Provides secure access to Egnyte's file management system for AI applications. Enables AI models to securely search, upload, and manage files within an Egnyte account. Connects AI workflows to Egnyte's enterprise content collaboration platform with full security controls. Perfect for organizations that need AI-powered document management while maintaining enterprise-grade security and compliance."
   }
diff --git a/event-bus/app.json b/event-bus/app.json
index 4531591d..8b3306e0 100644
--- a/event-bus/app.json
+++ b/event-bus/app.json
@@ -60,18 +60,11 @@
                 "description": "Event payload"
               }
             },
-            "required": [
-              "specversion",
-              "id",
-              "source",
-              "type"
-            ]
+            "required": ["specversion", "id", "source", "type"]
           }
         }
       },
-      "required": [
-        "events"
-      ]
+      "required": ["events"]
     },
     "outputSchema": {
       "type": "object",
@@ -114,13 +107,11 @@
             "description": "Re-deliver this event after this many ms"
           }
         },
-        "required": [
-          "success"
-        ]
+        "required": ["success"]
       }
     }
   }
 }
 ]
-}
\ No newline at end of file
+}
diff --git a/exa-search/README.md b/exa-search/README.md
index 747f89d4..34177401 100644
--- a/exa-search/README.md
+++ b/exa-search/README.md
@@ -39,5 +39,4 @@ https://mcp.exa.ai/mcp

 ---

-*This MCP requires an active Exa API key to function.*
-
+_This MCP requires an active Exa API key to function._
diff --git a/exa-search/app.json b/exa-search/app.json
index 5e96b3fa..604b118c 100644
--- a/exa-search/app.json
+++ b/exa-search/app.json
@@ -12,7 +12,17 @@
   "metadata": {
     "categories": ["Search"],
     "official": true,
-    "tags": ["search", "ai-search", "semantic-search", "web-search", "research", "exa", "knowledge", "discovery", "nlp"],
+    "tags": [
+      "search",
+      "ai-search",
+      "semantic-search",
+      "web-search",
+      "research",
+      "exa",
+      "knowledge",
+      "discovery",
+      "nlp"
+    ],
     "short_description": "AI-powered semantic search engine for finding high-quality content",
     "mesh_description": "Exa is an AI-powered search engine designed specifically for LLMs and AI applications, providing semantic search capabilities that understand the meaning and context of queries rather than just matching keywords. This official MCP enables you to search the web using natural language queries that understand intent, find similar content based on examples, and discover high-quality articles, research papers, and authoritative sources. Unlike traditional search engines, Exa uses neural embeddings to understand semantic relationships between concepts, allowing you to find content even when the exact keywords don't match. Search by similarity - provide a URL or text sample and find similar content across the web. Filter results by domain, publication date, content type, and authority scores. Access cleaned, structured content without ads, popups, or irrelevant information. Use Exa for research tasks like competitive analysis, market research, academic literature review, and trend discovery. Find expert opinions, technical documentation, case studies, and data-driven articles. The search engine is optimized for factual accuracy and source reliability, making it ideal for AI applications that need trustworthy information. Perfect for building RAG systems, research assistants, content discovery tools, and knowledge management applications that require high-quality web content with semantic understanding."
   }
diff --git a/farmrio-reorder-collection-db/app.json b/farmrio-reorder-collection-db/app.json
index 4a58e368..7b48b4f6 100644
--- a/farmrio-reorder-collection-db/app.json
+++ b/farmrio-reorder-collection-db/app.json
@@ -22,15 +22,9 @@
   "icon": "https://assets.decocache.com/mcp/{uuid}/icon.png",
   "unlisted": true,
   "metadata": {
-    "categories": [
-      "Productivity"
-    ],
+    "categories": ["Productivity"],
     "official": false,
-    "tags": [
-      "postgres",
-      "private",
-      "farmrio"
-    ],
+    "tags": ["postgres", "private", "farmrio"],
     "short_description": "Private hosted PostgreSQL MCP for Farm Rio collection reorder reports",
     "mesh_description": "**Private Access** - Requires MCP_ACCESS_TOKEN configured in the connection state. **Hosted Database** - Uses internal managed PostgreSQL via deploy secret. **Collections & Reports** - Provides CRUD tools for collections and ranked report datasets."
   }
diff --git a/farmrio-reorder-collection-db/package.json b/farmrio-reorder-collection-db/package.json
index 57a1e1cc..65efd40a 100644
--- a/farmrio-reorder-collection-db/package.json
+++ b/farmrio-reorder-collection-db/package.json
@@ -1,8 +1,8 @@
 {
   "name": "farmrio-reorder-collection-db",
   "version": "1.0.0",
-  "description": "PostgreSQL database for Farm Rio collection reorder reports",
   "private": true,
+  "description": "PostgreSQL database for Farm Rio collection reorder reports",
   "type": "module",
   "scripts": {
     "dev": "bun run --hot server/main.ts",
diff --git a/flux/package.json b/flux/package.json
index 052c0083..4a4912c2 100644
--- a/flux/package.json
+++ b/flux/package.json
@@ -1,8 +1,8 @@
 {
   "name": "flux",
   "version": "1.0.0",
-  "description": "FLUX Image Generation MCP Server - Generate images with Black Forest Labs FLUX API",
   "private": true,
+  "description": "FLUX Image Generation MCP Server - Generate images with Black Forest Labs FLUX API",
   "type": "module",
   "scripts": {
     "dev": "bun run --hot server/main.ts",
diff --git a/flux/tsconfig.json b/flux/tsconfig.json
index 77db41f1..b4b02a45 100644
--- a/flux/tsconfig.json
+++ b/flux/tsconfig.json
@@ -29,7 +29,5 @@
       "server/*": ["./server/*"]
     }
   },
-  "include": [
-    "server"
-  ]
+  "include": ["server"]
 }
diff --git a/fusionauth/README.md b/fusionauth/README.md
index 2e6b5d36..711b05ea 100644
--- a/fusionauth/README.md
+++ b/fusionauth/README.md
@@ -7,6 +7,7 @@
 ### Purpose

 This MCP server allows client applications to:
+
 - Query FusionAuth documentation for implementation guidance
 - Find code examples and configuration references for auth workflows
 - Access API documentation for users, applications, and tokens
diff --git a/fusionauth/app.json b/fusionauth/app.json
index 3ab7db2f..4c0c0416 100644
--- a/fusionauth/app.json
+++ b/fusionauth/app.json
@@ -13,7 +13,14 @@
   "metadata": {
     "categories": ["Development", "Security"],
     "official": true,
-    "tags": ["fusionauth", "documentation", "identity", "authentication", "security", "knowledge-base"],
+    "tags": [
+      "fusionauth",
+      "documentation",
+      "identity",
+      "authentication",
+      "security",
+      "knowledge-base"
+    ],
     "short_description": "Access FusionAuth identity and authentication documentation",
     "mesh_description": "Provides access to FusionAuth documentation. Enables users to retrieve and explore documentation content for the FusionAuth identity platform. Covers authentication, authorization, user management, single sign-on, and multi-factor authentication topics. Perfect for developers integrating FusionAuth into their applications and needing quick access to implementation guides and API references."
   }
diff --git a/gemini-pro-vision/README.md b/gemini-pro-vision/README.md
index 54e2229f..c41a43d3 100644
--- a/gemini-pro-vision/README.md
+++ b/gemini-pro-vision/README.md
@@ -7,15 +7,18 @@ MCP (Model Context Protocol) for image analysis using Google Gemini Pro Vision.
 This MCP offers three main tools for image analysis:

 ### 1. `analyze_image` - Image Analysis
+
 Analyzes an image and answers questions about it.

 **Use cases:**
+
 - Describe image content
 - Identify objects, people, places
 - Answer questions about the image
 - Context and emotion analysis

 **Example:**
+
 ```json
 {
   "imageUrl": "https://example.com/image.jpg",
@@ -25,36 +28,39 @@ Analyzes an image and answers questions about it.
 ```

 ### 2. `compare_images` - Image Comparison
+
 Compares multiple images and identifies differences or similarities.

 **Use cases:**
+
 - Identify changes between design versions
 - Compare similar products
 - Verify visual consistency
 - Detect subtle differences

 **Example:**
+
 ```json
 {
-  "imageUrls": [
-    "https://example.com/before.jpg",
-    "https://example.com/after.jpg"
-  ],
+  "imageUrls": ["https://example.com/before.jpg", "https://example.com/after.jpg"],
   "prompt": "What are the main differences between these images?",
   "model": "gemini-2.5-flash"
 }
 ```

 ### 3. `extract_text_from_image` - OCR (Text Extraction)
+
 Extracts all visible text from an image.

 **Use cases:**
+
 - Digitize documents
 - Read signs and notices
 - Extract text from screenshots
 - Process receipts and invoices

 **Example:**
+
 ```json
 {
   "imageUrl": "https://example.com/document.jpg",
@@ -68,6 +74,7 @@ Extracts all visible text from an image.
 ### For End Users

 Just install the MCP from the Deco marketplace and authorize the usage. You'll be charged per operation:
+
 - **$0.05** per image analysis
 - **$0.10** per image comparison
 - **$0.03** per OCR operation
@@ -133,21 +140,25 @@ bun run deploy
 ## 📝 Prompt Examples

 ### General Analysis
+
 - "Describe this image in detail"
 - "What objects do you see in this image?"
 - "What is the context of this photo?"

 ### Specific Analysis
+
 - "Identify all people in this image"
 - "What brand is this product?"
 - "Does this image contain any text?"

 ### OCR
+
 - "Extract all text from this image"
 - "Read the content of this document"
 - "Transcribe the visible text"

 ### Comparison
+
 - "What are the differences between these images?"
 - "Do these two photos show the same person?"
 - "How has the design changed between versions?"
@@ -163,6 +174,7 @@ bun run deploy
 ## 📚 API Documentation

 For more details about the Gemini Vision API, see:
+
 - [Official Gemini documentation](https://ai.google.dev/gemini-api/docs/vision)
 - [Vision prompting guide](https://ai.google.dev/gemini-api/docs/vision#prompting-with-images)

@@ -178,4 +190,3 @@ This MCP is part of the Deco CMS MCPs monorepo. To contribute:

 ## 📄 License

 Maintained by the Deco CMS team.
- diff --git a/gemini-pro-vision/package.json b/gemini-pro-vision/package.json index 352583c9..000ef947 100644 --- a/gemini-pro-vision/package.json +++ b/gemini-pro-vision/package.json @@ -1,8 +1,8 @@ { "name": "gemini-pro-vision", "version": "1.0.0", - "description": "MCP server for image analysis using Gemini Pro Vision", "private": true, + "description": "MCP server for image analysis using Gemini Pro Vision", "type": "module", "scripts": { "dev": "deco dev --vite", diff --git a/gemini-pro-vision/tsconfig.json b/gemini-pro-vision/tsconfig.json index c5b23929..392b6275 100644 --- a/gemini-pro-vision/tsconfig.json +++ b/gemini-pro-vision/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/gemini-pro-vision/wrangler.toml b/gemini-pro-vision/wrangler.toml index f116fd8c..01d95289 100644 --- a/gemini-pro-vision/wrangler.toml +++ b/gemini-pro-vision/wrangler.toml @@ -2,7 +2,7 @@ name = "gemini-pro-vision" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] diff --git a/github-repo-reports-plugin/README.md b/github-repo-reports-plugin/README.md index 4b0fa8c6..185ea520 100644 --- a/github-repo-reports-plugin/README.md +++ b/github-repo-reports-plugin/README.md @@ -76,20 +76,20 @@ Run `npm audit fix` to apply automatic patches. 
### Supported section types -| Type | Description | -|---|---| -| `metrics` | Key-value indicators with optional status, units, and deltas | -| `table` | Tabular data with column headers | -| `markdown` | Free-form GFM content | +| Type | Description | +| ---------- | ------------------------------------------------------------ | +| `metrics` | Key-value indicators with optional status, units, and deltas | +| `table` | Tabular data with column headers | +| `markdown` | Free-form GFM content | ### Status values -| Status | When to use | -|---|---| -| `passing` | Everything within acceptable thresholds | -| `warning` | Metrics degraded or approaching thresholds | +| Status | When to use | +| --------- | ------------------------------------------- | +| `passing` | Everything within acceptable thresholds | +| `warning` | Metrics degraded or approaching thresholds | | `failing` | Critical issues needing immediate attention | -| `info` | Informational, no pass/fail judgment | +| `info` | Informational, no pass/fail judgment | ### Categories diff --git a/github-repo-reports-plugin/skills/generate-report/SKILL.md b/github-repo-reports-plugin/skills/generate-report/SKILL.md index 3de08452..75dd4e2e 100644 --- a/github-repo-reports-plugin/skills/generate-report/SKILL.md +++ b/github-repo-reports-plugin/skills/generate-report/SKILL.md @@ -39,16 +39,16 @@ after the structured sections declared in frontmatter. ## Frontmatter fields -| Field | Required | Type | Description | -|---|---|---|---| -| `title` | **Yes** | `string` | Human-readable report title. | -| `category` | **Yes** | `string` | Category for filtering (e.g., `performance`, `security`, `quality`). See category list below. | -| `status` | **Yes** | `"passing" \| "warning" \| "failing" \| "info"` | Overall health outcome. | -| `summary` | **Yes** | `string` | One-line summary of findings. Keep it concise — this is shown in list views. 
| -| `updatedAt` | **Yes** | `string` | ISO 8601 timestamp of when the report was generated (`YYYY-MM-DDTHH:mm:ssZ`). | -| `source` | Optional | `string` | Name of the agent or service that generated this report (e.g., `lighthouse`, `security-scanner`). | -| `tags` | Optional | `string[]` | Free-form tags for filtering (e.g., `[homepage, mobile, ci]`). | -| `sections` | Optional | `ReportSection[]` | Structured content sections (metrics, tables, markdown). See section types below. | +| Field | Required | Type | Description | +| ----------- | -------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------------- | +| `title` | **Yes** | `string` | Human-readable report title. | +| `category` | **Yes** | `string` | Category for filtering (e.g., `performance`, `security`, `quality`). See category list below. | +| `status` | **Yes** | `"passing" \| "warning" \| "failing" \| "info"` | Overall health outcome. | +| `summary` | **Yes** | `string` | One-line summary of findings. Keep it concise — this is shown in list views. | +| `updatedAt` | **Yes** | `string` | ISO 8601 timestamp of when the report was generated (`YYYY-MM-DDTHH:mm:ssZ`). | +| `source` | Optional | `string` | Name of the agent or service that generated this report (e.g., `lighthouse`, `security-scanner`). | +| `tags` | Optional | `string[]` | Free-form tags for filtering (e.g., `[homepage, mobile, ci]`). | +| `sections` | Optional | `ReportSection[]` | Structured content sections (metrics, tables, markdown). See section types below. | Always provide `title`, `category`, `status`, `summary`, and `updatedAt`. Do not leave them empty or omit them. @@ -111,13 +111,13 @@ sections: Each metric item: -| Field | Required | Type | Description | -|---|---|---|---| -| `label` | **Yes** | `string` | Metric name (e.g., "LCP", "Coverage"). | -| `value` | **Yes** | `number \| string` | Current value. 
| -| `unit` | Optional | `string` | Unit of measurement (e.g., "s", "ms", "%", "score"). | -| `previousValue` | Optional | `number \| string` | Previous value for delta comparison. | -| `status` | Optional | `"passing" \| "warning" \| "failing" \| "info"` | Status of this individual metric. | +| Field | Required | Type | Description | +| --------------- | -------- | ----------------------------------------------- | ---------------------------------------------------- | +| `label` | **Yes** | `string` | Metric name (e.g., "LCP", "Coverage"). | +| `value` | **Yes** | `number \| string` | Current value. | +| `unit` | Optional | `string` | Unit of measurement (e.g., "s", "ms", "%", "score"). | +| `previousValue` | Optional | `number \| string` | Previous value for delta comparison. | +| `status` | Optional | `"passing" \| "warning" \| "failing" \| "info"` | Status of this individual metric. | ### Table section @@ -135,24 +135,24 @@ sections: ## Status reference -| Status | When to use | -|---|---| -| `passing` | Everything is within acceptable thresholds. | +| Status | When to use | +| --------- | ---------------------------------------------------- | +| `passing` | Everything is within acceptable thresholds. | | `warning` | Some metrics are degraded or approaching thresholds. | -| `failing` | Critical issues that need immediate attention. | -| `info` | Informational report with no pass/fail judgment. | +| `failing` | Critical issues that need immediate attention. | +| `info` | Informational report with no pass/fail judgment. | ## Category conventions -| Category | Use case | -|---|---| -| `performance` | Web vitals, bundle size, load times. | -| `security` | Vulnerability scans, dependency audits. | -| `accessibility` | WCAG compliance, axe-core results. | -| `seo` | Meta tags, structured data, crawlability. | -| `quality` | Code quality, test coverage, lint results. | -| `uptime` | Health checks, availability monitoring. 
| -| `compliance` | License audits, policy checks. | +| Category | Use case | +| --------------- | ------------------------------------------ | +| `performance` | Web vitals, bundle size, load times. | +| `security` | Vulnerability scans, dependency audits. | +| `accessibility` | WCAG compliance, axe-core results. | +| `seo` | Meta tags, structured data, crawlability. | +| `quality` | Code quality, test coverage, lint results. | +| `uptime` | Health checks, availability monitoring. | +| `compliance` | License audits, policy checks. | ## Git workflow diff --git a/github-repo-reports/README.md b/github-repo-reports/README.md index 95efcb27..9de7a330 100644 --- a/github-repo-reports/README.md +++ b/github-repo-reports/README.md @@ -11,4 +11,3 @@ GitHub-backed reports MCP server implementing the Reports Binding. Stores and re 5. Test with `bun run dev` See [template-minimal/README.md](../template-minimal/README.md) for detailed instructions. - diff --git a/github-repo-reports/REPORT_FORMAT.md b/github-repo-reports/REPORT_FORMAT.md index 5fd04bee..ae30b53a 100644 --- a/github-repo-reports/REPORT_FORMAT.md +++ b/github-repo-reports/REPORT_FORMAT.md @@ -37,16 +37,16 @@ after the structured sections declared in frontmatter. ## Frontmatter fields -| Field | Required | Type | Description | -|---|---|---|---| -| `title` | **Yes** | `string` | Human-readable report title. | -| `category` | **Yes** | `string` | Category for filtering (e.g., `performance`, `security`, `quality`). See category list below. | -| `status` | **Yes** | `"passing" \| "warning" \| "failing" \| "info"` | Overall health outcome. See status reference below. | -| `summary` | **Yes** | `string` | One-line summary of findings. Keep it concise -- this is shown in list views. | -| `updatedAt` | **Yes** | `string` | ISO 8601 timestamp of when the report was generated (`YYYY-MM-DDTHH:mm:ssZ`). 
| -| `source` | Optional | `string` | Name of the agent or service that generated this report (e.g., `lighthouse`, `security-scanner`). | -| `tags` | Optional | `string[]` | Free-form tags for filtering (e.g., `[homepage, mobile, ci]`). | -| `sections` | Optional | `ReportSection[]` | Structured content sections (metrics, tables). See section types below. | +| Field | Required | Type | Description | +| ----------- | -------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------------- | +| `title` | **Yes** | `string` | Human-readable report title. | +| `category` | **Yes** | `string` | Category for filtering (e.g., `performance`, `security`, `quality`). See category list below. | +| `status` | **Yes** | `"passing" \| "warning" \| "failing" \| "info"` | Overall health outcome. See status reference below. | +| `summary` | **Yes** | `string` | One-line summary of findings. Keep it concise -- this is shown in list views. | +| `updatedAt` | **Yes** | `string` | ISO 8601 timestamp of when the report was generated (`YYYY-MM-DDTHH:mm:ssZ`). | +| `source` | Optional | `string` | Name of the agent or service that generated this report (e.g., `lighthouse`, `security-scanner`). | +| `tags` | Optional | `string[]` | Free-form tags for filtering (e.g., `[homepage, mobile, ci]`). | +| `sections` | Optional | `ReportSection[]` | Structured content sections (metrics, tables). See section types below. | Always provide `title`, `category`, `status`, `summary`, and `updatedAt`. Do not leave them empty or omit them. @@ -119,13 +119,13 @@ sections: Each metric item: -| Field | Required | Type | Description | -|---|---|---|---| -| `label` | **Yes** | `string` | Metric name (e.g., "LCP", "Coverage"). | -| `value` | **Yes** | `number \| string` | Current value. | -| `unit` | Optional | `string` | Unit of measurement (e.g., "s", "ms", "%", "score"). 
| -| `previousValue` | Optional | `number \| string` | Previous value for delta comparison. | -| `status` | Optional | `"passing" \| "warning" \| "failing" \| "info"` | Status of this individual metric. | +| Field | Required | Type | Description | +| --------------- | -------- | ----------------------------------------------- | ---------------------------------------------------- | +| `label` | **Yes** | `string` | Metric name (e.g., "LCP", "Coverage"). | +| `value` | **Yes** | `number \| string` | Current value. | +| `unit` | Optional | `string` | Unit of measurement (e.g., "s", "ms", "%", "score"). | +| `previousValue` | Optional | `number \| string` | Previous value for delta comparison. | +| `status` | Optional | `"passing" \| "warning" \| "failing" \| "info"` | Status of this individual metric. | ### Table section @@ -160,9 +160,9 @@ sections: Each criterion item: -| Field | Required | Type | Description | -|---|---|---|---| -| `label` | **Yes** | `string` | Short name of the criterion. | +| Field | Required | Type | Description | +| ------------- | -------- | -------- | ------------------------------------ | +| `label` | **Yes** | `string` | Short name of the criterion. | | `description` | Optional | `string` | Longer explanation of the criterion. | ### Note section @@ -175,9 +175,9 @@ sections: content: "We changed the algorithm to consider grade more heavily this run. Also testing estampa grouping for the first time." ``` -| Field | Required | Type | Description | -|---|---|---|---| -| `content` | **Yes** | `string` | The note text. | +| Field | Required | Type | Description | +| --------- | -------- | -------- | -------------- | +| `content` | **Yes** | `string` | The note text. | ### Ranked list section @@ -212,39 +212,39 @@ sections: Each row: -| Field | Required | Type | Description | -|---|---|---|---| -| `position` | **Yes** | `number` | Current rank position. 
| -| `delta` | **Yes** | `number` | Change in position (positive = moved up, negative = moved down, 0 = unchanged). | -| `label` | **Yes** | `string` | Item name or title. | -| `image` | **Yes** | `string` | URL of the item image. | -| `values` | **Yes** | `(string \| number)[]` | Attribute values for the item. | -| `note` | Optional | `Record` | Key-value metrics for this item (e.g. sessions, rates). | +| Field | Required | Type | Description | +| ---------- | -------- | ---------------------------------- | ------------------------------------------------------------------------------- | +| `position` | **Yes** | `number` | Current rank position. | +| `delta` | **Yes** | `number` | Change in position (positive = moved up, negative = moved down, 0 = unchanged). | +| `label` | **Yes** | `string` | Item name or title. | +| `image` | **Yes** | `string` | URL of the item image. | +| `values` | **Yes** | `(string \| number)[]` | Attribute values for the item. | +| `note` | Optional | `Record` | Key-value metrics for this item (e.g. sessions, rates). | --- ## Status reference -| Status | When to use | -|---|---| -| `passing` | Everything is within acceptable thresholds. | +| Status | When to use | +| --------- | ---------------------------------------------------- | +| `passing` | Everything is within acceptable thresholds. | | `warning` | Some metrics are degraded or approaching thresholds. | -| `failing` | Critical issues that need immediate attention. | -| `info` | Informational report with no pass/fail judgment. | +| `failing` | Critical issues that need immediate attention. | +| `info` | Informational report with no pass/fail judgment. | ## Category conventions Use these common categories or define your own: -| Category | Use case | -|---|---| -| `performance` | Web vitals, bundle size, load times. | -| `security` | Vulnerability scans, dependency audits. | -| `accessibility` | WCAG compliance, axe-core results. | -| `seo` | Meta tags, structured data, crawlability. 
| -| `quality` | Code quality, test coverage, lint results. | -| `uptime` | Health checks, availability monitoring. | -| `compliance` | License audits, policy checks. | +| Category | Use case | +| --------------- | ------------------------------------------ | +| `performance` | Web vitals, bundle size, load times. | +| `security` | Vulnerability scans, dependency audits. | +| `accessibility` | WCAG compliance, axe-core results. | +| `seo` | Meta tags, structured data, crawlability. | +| `quality` | Code quality, test coverage, lint results. | +| `uptime` | Health checks, availability monitoring. | +| `compliance` | License audits, policy checks. | --- diff --git a/github-repo-reports/app.json b/github-repo-reports/app.json index b5cf09ed..5099dc84 100644 --- a/github-repo-reports/app.json +++ b/github-repo-reports/app.json @@ -15,17 +15,9 @@ "prefix": "Bearer" }, "metadata": { - "categories": [ - "Productivity", - "Developer Tools" - ], + "categories": ["Productivity", "Developer Tools"], "official": false, - "tags": [ - "github", - "reports", - "markdown", - "mcp" - ], + "tags": ["github", "reports", "markdown", "mcp"], "short_description": "Read and manage reports stored as Markdown files in a GitHub repository.", "mesh_description": "Implements the **Reports Binding** to display an inbox-style UI of reports sourced from a GitHub repository. Reports are stored as **Markdown files with YAML frontmatter** under a configurable directory. **Key Features** — Reads reports directly from GitHub (no server-side persistence). Supports **directory nesting as tags** (e.g., reports/security/audit.md gets tag \"security\"). Full lifecycle management (unread/read/dismissed) persisted in the repo. Supports structured sections: markdown, metrics with KPIs, and tables. **Authentication** — Uses GitHub App OAuth so users can grant access to specific repositories. **Configuration** — Set the target repository, reports directory path, and branch via the Mesh UI." 
} diff --git a/github-repo-reports/package.json b/github-repo-reports/package.json index 89075ff7..4407371d 100644 --- a/github-repo-reports/package.json +++ b/github-repo-reports/package.json @@ -1,8 +1,8 @@ { "name": "github-repo-reports", "version": "1.0.0", - "description": "GitHub-backed reports MCP server implementing the Reports Binding. Stores and reads reports as Markdown files with YAML frontmatter from a configurable GitHub repository.", "private": true, + "description": "GitHub-backed reports MCP server implementing the Reports Binding. Stores and reads reports as Markdown files with YAML frontmatter from a configurable GitHub repository.", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/github/app.json b/github/app.json index 738d52c8..e5fd9ef9 100644 --- a/github/app.json +++ b/github/app.json @@ -10,10 +10,7 @@ "icon": "https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png", "unlisted": false, "metadata": { - "categories": [ - "Developer Tools", - "Automation" - ], + "categories": ["Developer Tools", "Automation"], "official": false, "tags": [ "github", @@ -29,4 +26,4 @@ "short_description": "OAuth proxy for the official GitHub MCP Server — 30+ tools for repos, issues, PRs, and more", "mesh_description": "OAuth proxy for the official GitHub MCP Server. Authenticates via GitHub App OAuth and exposes 30+ tools (repository management, issue tracking, pull request workflows, code search, branch management, and more). Receives GitHub webhook events and routes them to the correct connection by installation ID. Install the GitHub App, authenticate via OAuth, and get full access to the complete GitHub API toolset." 
} -} \ No newline at end of file +} diff --git a/github/package.json b/github/package.json index f8c2c5a1..67867d01 100644 --- a/github/package.json +++ b/github/package.json @@ -1,8 +1,8 @@ { "name": "github", "version": "1.0.0", - "description": "GitHub MCP - OAuth proxy for the official GitHub MCP Server", "private": true, + "description": "GitHub MCP - OAuth proxy for the official GitHub MCP Server", "type": "module", "scripts": { "dev": "bun run --env-file=server/.env --hot server/main.ts", diff --git a/github/tsconfig.json b/github/tsconfig.json index a9a75813..72f9b90c 100644 --- a/github/tsconfig.json +++ b/github/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023" - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ @@ -24,17 +22,10 @@ /* Path Aliases */ "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ] + "server/*": ["./server/*"] }, /* Types */ - "types": [ - "@types/node" - ] + "types": ["@types/node"] }, - "include": [ - "server" - ] + "include": ["server"] } - diff --git a/google-apps-script/README.md b/google-apps-script/README.md index 295f5091..5b170110 100644 --- a/google-apps-script/README.md +++ b/google-apps-script/README.md @@ -6,46 +6,46 @@ MCP Server para integração com a API do Google Apps Script. 
Permite gerenciar ### 🗂️ Projects (5 ferramentas) -| Ferramenta | Descrição | -|------------|-----------| -| `create_project` | Cria um novo projeto Apps Script vazio | -| `get_project` | Obtém metadados de um projeto (título, criador, timestamps) | -| `get_project_content` | Obtém o conteúdo do projeto (arquivos e código fonte) | -| `update_project_content` | Atualiza os arquivos do projeto | -| `get_project_metrics` | Obtém métricas de uso (usuários ativos, execuções, falhas) | +| Ferramenta | Descrição | +| ------------------------ | ----------------------------------------------------------- | +| `create_project` | Cria um novo projeto Apps Script vazio | +| `get_project` | Obtém metadados de um projeto (título, criador, timestamps) | +| `get_project_content` | Obtém o conteúdo do projeto (arquivos e código fonte) | +| `update_project_content` | Atualiza os arquivos do projeto | +| `get_project_metrics` | Obtém métricas de uso (usuários ativos, execuções, falhas) | ### ⚡ Scripts - Execução (2 ferramentas) -| Ferramenta | Descrição | -|------------|-----------| -| `run_script` | Executa uma função do script (requer deployment como API executable) | +| Ferramenta | Descrição | +| --------------------- | ------------------------------------------------------------------------- | +| `run_script` | Executa uma função do script (requer deployment como API executable) | | `run_script_dev_mode` | Executa em modo desenvolvimento (usa código mais recente, só para owners) | ### 📦 Versions (3 ferramentas) -| Ferramenta | Descrição | -|------------|-----------| +| Ferramenta | Descrição | +| ---------------- | --------------------------------------------- | | `create_version` | Cria uma nova versão imutável do código atual | -| `get_version` | Obtém detalhes de uma versão específica | -| `list_versions` | Lista todas as versões de um projeto | +| `get_version` | Obtém detalhes de uma versão específica | +| `list_versions` | Lista todas as versões de um projeto | ### 🚀 
Deployments (5 ferramentas) -| Ferramenta | Descrição | -|------------|-----------| +| Ferramenta | Descrição | +| ------------------- | ------------------------------------------------------- | | `create_deployment` | Cria um deployment (web app, API executable, ou add-on) | -| `get_deployment` | Obtém detalhes de um deployment específico | -| `list_deployments` | Lista todos os deployments de um projeto | -| `update_deployment` | Atualiza um deployment existente | -| `delete_deployment` | Remove um deployment | +| `get_deployment` | Obtém detalhes de um deployment específico | +| `list_deployments` | Lista todos os deployments de um projeto | +| `update_deployment` | Atualiza um deployment existente | +| `delete_deployment` | Remove um deployment | ### 📊 Processes - Monitoramento (3 ferramentas) -| Ferramenta | Descrição | -|------------|-----------| -| `list_user_processes` | Lista execuções do usuário em todos os scripts | -| `list_script_processes` | Lista execuções de um script específico | -| `get_running_processes` | Obtém processos em execução no momento | +| Ferramenta | Descrição | +| ----------------------- | ---------------------------------------------- | +| `list_user_processes` | Lista execuções do usuário em todos os scripts | +| `list_script_processes` | Lista execuções de um script específico | +| `get_running_processes` | Obtém processos em execução no momento | ## Autenticação @@ -80,17 +80,17 @@ function myFunction() { Logger.log('Hello World!'); return 'Success'; } - ` + `, }, { name: "appsscript", type: "JSON", source: JSON.stringify({ timeZone: "America/Sao_Paulo", - exceptionLogging: "STACKDRIVER" - }) - } - ] + exceptionLogging: "STACKDRIVER", + }), + }, + ], }); ``` @@ -100,14 +100,14 @@ function myFunction() { // 1. Criar versão const version = await create_version({ scriptId: "SCRIPT_ID", - description: "v1.0 - Release inicial" + description: "v1.0 - Release inicial", }); // 2. 
Criar deployment const deployment = await create_deployment({ scriptId: "SCRIPT_ID", versionNumber: version.versionNumber, - description: "Produção" + description: "Produção", }); ``` @@ -118,7 +118,7 @@ const deployment = await create_deployment({ const result = await run_script({ scriptId: "SCRIPT_ID", functionName: "myFunction", - parameters: ["arg1", "arg2"] + parameters: ["arg1", "arg2"], }); console.log(result.result); @@ -134,39 +134,39 @@ console.log(`${running.runningCount} processos em execução`); // Listar processos de um script específico const processes = await list_script_processes({ scriptId: "SCRIPT_ID", - statuses: ["COMPLETED", "FAILED"] + statuses: ["COMPLETED", "FAILED"], }); ``` ## Tipos de Arquivo -| Tipo | Extensão | Descrição | -|------|----------|-----------| -| `SERVER_JS` | `.gs` | Código Google Apps Script (JavaScript) | -| `HTML` | `.html` | Arquivos HTML para interfaces | -| `JSON` | `.json` | Manifesto do projeto (`appsscript.json`) | +| Tipo | Extensão | Descrição | +| ----------- | -------- | ---------------------------------------- | +| `SERVER_JS` | `.gs` | Código Google Apps Script (JavaScript) | +| `HTML` | `.html` | Arquivos HTML para interfaces | +| `JSON` | `.json` | Manifesto do projeto (`appsscript.json`) | ## Status de Processo -| Status | Descrição | -|--------|-----------| -| `RUNNING` | Em execução | +| Status | Descrição | +| ----------- | --------------------- | +| `RUNNING` | Em execução | | `COMPLETED` | Concluído com sucesso | -| `FAILED` | Falhou com erro | -| `TIMED_OUT` | Excedeu tempo limite | -| `CANCELED` | Cancelado | -| `PAUSED` | Pausado | +| `FAILED` | Falhou com erro | +| `TIMED_OUT` | Excedeu tempo limite | +| `CANCELED` | Cancelado | +| `PAUSED` | Pausado | ## Tipos de Processo -| Tipo | Descrição | -|------|-----------| -| `WEBAPP` | Execução de web app | -| `EXECUTION_API` | Execução via API | -| `TIME_DRIVEN` | Trigger baseado em tempo | -| `TRIGGER` | Trigger de evento | -| `ADD_ON` | Execução de 
add-on | -| `EDITOR` | Execução do editor | +| Tipo | Descrição | +| --------------- | ------------------------ | +| `WEBAPP` | Execução de web app | +| `EXECUTION_API` | Execução via API | +| `TIME_DRIVEN` | Trigger baseado em tempo | +| `TRIGGER` | Trigger de evento | +| `ADD_ON` | Execução de add-on | +| `EDITOR` | Execução do editor | ## Limitações da API @@ -180,6 +180,3 @@ const processes = await list_script_processes({ - [Google Apps Script API Reference](https://developers.google.com/apps-script/api/reference/rest) - [Quotas e Limites](https://developers.google.com/apps-script/guides/services/quotas) - [Executar Scripts via API](https://developers.google.com/apps-script/api/how-tos/execute) - - - diff --git a/google-apps-script/app.json b/google-apps-script/app.json index 6becf42f..ce1eb64b 100644 --- a/google-apps-script/app.json +++ b/google-apps-script/app.json @@ -17,6 +17,3 @@ "mesh_description": "The Google Apps Script MCP provides comprehensive integration with Google Apps Script API, enabling full programmatic control over script projects. This MCP allows AI agents to create and manage Apps Script projects, read and update script files, execute functions remotely, manage versions and deployments, and monitor script executions. It supports advanced features including deployment configuration, version control, and execution logging. Perfect for automating Google Workspace workflows, building custom integrations, and managing script-based applications. Ideal for users who need to programmatically manage Apps Script projects or execute automation scripts. Provides secure OAuth-based authentication." 
} } - - - diff --git a/google-apps-script/package.json b/google-apps-script/package.json index f92acec2..0aaccb00 100644 --- a/google-apps-script/package.json +++ b/google-apps-script/package.json @@ -1,8 +1,8 @@ { "name": "google-apps-script", "version": "1.0.0", - "description": "Google Apps Script MCP Server - Manage and execute Apps Script projects", "private": true, + "description": "Google Apps Script MCP Server - Manage and execute Apps Script projects", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-apps-script/tsconfig.json b/google-apps-script/tsconfig.json index a4a743d9..6a5ef6d9 100644 --- a/google-apps-script/tsconfig.json +++ b/google-apps-script/tsconfig.json @@ -24,6 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - - - diff --git a/google-big-query/app.json b/google-big-query/app.json index e462d042..f46d1442 100644 --- a/google-big-query/app.json +++ b/google-big-query/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google BigQuery MCP provides comprehensive access to Google's serverless data warehouse platform, enabling AI agents to execute SQL queries, manage datasets, explore table schemas, monitor query jobs, and analyze large-scale data. This MCP supports running standard SQL and legacy SQL queries, retrieving query results, managing BigQuery datasets and tables, exploring table metadata and column definitions, accessing query job history, monitoring job status, and discovering available projects. It enables advanced data analytics workflows including cross-dataset joins, complex aggregations, time-series analysis, and integration with machine learning models. The integration is designed for data analysts, data scientists, and business intelligence applications that need programmatic access to BigQuery for data exploration, reporting, job monitoring, or automated analytics pipelines. 
Supports query optimization, cost management through query previews, job status tracking, and multi-project access. Ideal for building data-driven AI applications, automated reporting systems, intelligent data exploration tools, or query monitoring dashboards." } } - diff --git a/google-big-query/package.json b/google-big-query/package.json index 0fa378e1..11902f6b 100644 --- a/google-big-query/package.json +++ b/google-big-query/package.json @@ -1,8 +1,8 @@ { "name": "google-big-query", "version": "1.0.0", - "description": "Google BigQuery MCP Server - Query and manage BigQuery datasets", "private": true, + "description": "Google BigQuery MCP Server - Query and manage BigQuery datasets", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", @@ -25,4 +25,3 @@ "node": ">=22.0.0" } } - diff --git a/google-big-query/tsconfig.json b/google-big-query/tsconfig.json index a7e0e946..b4b02a45 100644 --- a/google-big-query/tsconfig.json +++ b/google-big-query/tsconfig.json @@ -29,8 +29,5 @@ "server/*": ["./server/*"] } }, - "include": [ - "server" - ] + "include": ["server"] } - diff --git a/google-bigquery-official/README.md b/google-bigquery-official/README.md index f4d010f6..6970deb3 100644 --- a/google-bigquery-official/README.md +++ b/google-bigquery-official/README.md @@ -39,5 +39,4 @@ https://bigquery.googleapis.com/mcp --- -*This MCP requires an active Google Cloud account with BigQuery enabled.* - +_This MCP requires an active Google Cloud account with BigQuery enabled._ diff --git a/google-bigquery-official/app.json b/google-bigquery-official/app.json index 08135556..66bb52e8 100644 --- a/google-bigquery-official/app.json +++ b/google-bigquery-official/app.json @@ -13,7 +13,17 @@ "categories": ["Data Analysis"], "official": true, "mesh_unlisted": true, - "tags": ["data-warehouse", "sql", "analytics", "bigdata", "google-cloud", "machine-learning", "bi", "data-science", "petabyte-scale"], + "tags": [ + "data-warehouse", + "sql", + "analytics", + 
"bigdata", + "google-cloud", + "machine-learning", + "bi", + "data-science", + "petabyte-scale" + ], "short_description": "Analyze massive datasets with Google BigQuery serverless data warehouse", "mesh_description": "Google BigQuery is a fully managed, serverless data warehouse that enables super-fast SQL queries using the processing power of Google's infrastructure. This official MCP provides natural language access to BigQuery's powerful analytics capabilities, allowing you to analyze petabytes of data in seconds. Create and manage datasets, tables, and views with schema definition and partitioning strategies. Run complex SQL queries with support for standard SQL, geographic functions, machine learning functions, and user-defined functions (UDFs). Load data from various sources including Cloud Storage, Cloud SQL, Sheets, and streaming inserts. Execute federated queries that can join BigQuery data with external data sources like Cloud Storage, Bigtable, or Cloud SQL. Use BigQuery ML to create and execute machine learning models directly in SQL for tasks like classification, regression, forecasting, and recommendation. Access real-time analytics with streaming inserts and automatic table updates. Optimize costs with automatic query caching, materialized views, and partitioned tables. Monitor query performance with execution plans, slot usage, and cost estimates. Set up scheduled queries for automated data refreshes, configure access controls with IAM policies, and export results to various formats for further analysis or visualization." 
} diff --git a/google-calendar-sa/app.json b/google-calendar-sa/app.json index 6faa9093..ce7f2390 100644 --- a/google-calendar-sa/app.json +++ b/google-calendar-sa/app.json @@ -12,7 +12,15 @@ "metadata": { "categories": ["Productivity"], "official": false, - "tags": ["google", "calendar", "scheduling", "events", "meetings", "productivity", "service-account"], + "tags": [ + "google", + "calendar", + "scheduling", + "events", + "meetings", + "productivity", + "service-account" + ], "short_description": "Access Google Calendar using a service account with domain-wide delegation. ", "mesh_description": "The Google Calendar Service Account MCP provides server-to-server access to Google Calendar using a Google Workspace service account with domain-wide delegation. Unlike the OAuth-based MCP, this variant requires no per-user login — it authenticates using a service account JSON key and impersonates a target user to access their calendar. Tokens are generated and refreshed automatically. Ideal for organization-wide calendar visibility, automated scheduling, and internal tools that need to read all meetings for a shared account." 
} diff --git a/google-calendar-sa/package.json b/google-calendar-sa/package.json index 648c6043..42818bf2 100644 --- a/google-calendar-sa/package.json +++ b/google-calendar-sa/package.json @@ -1,8 +1,8 @@ { "name": "google-calendar-sa", "version": "1.0.0", - "description": "Google Calendar MCP Server with Service Account authentication", "private": true, + "description": "Google Calendar MCP Server with Service Account authentication", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-calendar-sa/server/main.ts b/google-calendar-sa/server/main.ts index 7031120e..55dbcdbd 100644 --- a/google-calendar-sa/server/main.ts +++ b/google-calendar-sa/server/main.ts @@ -81,7 +81,10 @@ function mergeResults( const slotArrays = results.map( (r) => ((r as { availableSlots?: { start: string; end: string }[] }) - .availableSlots ?? []) as { start: string; end: string }[], + .availableSlots ?? []) as { + start: string; + end: string; + }[], ); if (slotArrays.length === 0) return { availableSlots: [], totalFound: 0 }; diff --git a/google-calendar-sa/tsconfig.json b/google-calendar-sa/tsconfig.json index 77db41f1..b4b02a45 100644 --- a/google-calendar-sa/tsconfig.json +++ b/google-calendar-sa/tsconfig.json @@ -29,7 +29,5 @@ "server/*": ["./server/*"] } }, - "include": [ - "server" - ] + "include": ["server"] } diff --git a/google-calendar/README.md b/google-calendar/README.md index 2fac7bc0..76abda8e 100644 --- a/google-calendar/README.md +++ b/google-calendar/README.md @@ -5,12 +5,14 @@ MCP Server for Google Calendar integration. 
Manage calendars, events and check a ## Features ### Calendar Management + - **list_calendars** - List all user's calendars - **get_calendar** - Get details of a specific calendar - **create_calendar** - Create a new secondary calendar - **delete_calendar** - Delete a calendar ### Event Management + - **list_events** - List events with date filters and search - **get_event** - Get details of an event - **create_event** - Create event with attendees and reminders @@ -19,9 +21,11 @@ MCP Server for Google Calendar integration. Manage calendars, events and check a - **quick_add_event** - Create event using natural language ### Availability + - **get_freebusy** - Check busy/free time slots ### Advanced Operations + - **move_event** - Move an event between calendars - **find_available_slots** - Find free time slots across multiple calendars - **duplicate_event** - Create a copy of an existing event @@ -104,10 +108,7 @@ bun run build "dateTime": "2024-01-15T15:00:00-03:00", "timeZone": "America/Sao_Paulo" }, - "attendees": [ - { "email": "john@company.com" }, - { "email": "mary@company.com" } - ], + "attendees": [{ "email": "john@company.com" }, { "email": "mary@company.com" }], "guestsCanSeeOtherGuests": false, "sendUpdates": "all" } diff --git a/google-calendar/app.json b/google-calendar/app.json index 69475c39..d1f46aa4 100644 --- a/google-calendar/app.json +++ b/google-calendar/app.json @@ -12,9 +12,16 @@ "metadata": { "categories": ["Productivity"], "official": false, - "tags": ["google", "calendar", "scheduling", "events", "meetings", "productivity", "time-management"], + "tags": [ + "google", + "calendar", + "scheduling", + "events", + "meetings", + "productivity", + "time-management" + ], "short_description": "Integrate and manage your Google Calendar. 
Create, edit and delete events, check availability and sync your calendars.", "mesh_description": "The Google Calendar MCP provides comprehensive integration with Google Calendar, enabling full programmatic control over calendar events, schedules, and availability management. This MCP allows AI agents to create, read, update, and delete calendar events, check user availability across multiple calendars, manage event attendees and invitations, set up recurring events, and configure event reminders and notifications. It supports advanced scheduling features including finding optimal meeting times, managing calendar sharing and permissions, handling multiple calendars, and synchronizing events across different time zones. The integration is perfect for building intelligent scheduling assistants, automated meeting coordinators, calendar-based workflow triggers, and personal productivity tools. Ideal for teams and individuals who need to automate calendar management, integrate scheduling into business processes, or build calendar-aware applications. Provides secure OAuth-based authentication for calendar access." 
} } - diff --git a/google-calendar/package.json b/google-calendar/package.json index 0f3b12ba..30e2fb4c 100644 --- a/google-calendar/package.json +++ b/google-calendar/package.json @@ -1,9 +1,14 @@ { "name": "google-calendar", "version": "1.0.0", - "description": "Google Calendar MCP Server - Manage calendars and events", "private": true, + "description": "Google Calendar MCP Server - Manage calendars and events", "type": "module", + "exports": { + "./tools": "./server/tools/index.ts", + "./types": "./server/lib/types.ts", + "./constants": "./server/constants.ts" + }, "scripts": { "dev": "bun run --hot server/main.ts", "build:server": "NODE_ENV=production bun build server/main.ts --target=bun --outfile=dist/server/main.js", @@ -11,11 +16,6 @@ "publish": "cat app.json | deco registry publish -w /shared/deco -y", "check": "tsc --noEmit" }, - "exports": { - "./tools": "./server/tools/index.ts", - "./types": "./server/lib/types.ts", - "./constants": "./server/constants.ts" - }, "dependencies": { "@decocms/runtime": "^1.2.6", "zod": "^4.0.0" @@ -30,4 +30,3 @@ "node": ">=22.0.0" } } - diff --git a/google-calendar/tsconfig.json b/google-calendar/tsconfig.json index a7e0e946..b4b02a45 100644 --- a/google-calendar/tsconfig.json +++ b/google-calendar/tsconfig.json @@ -29,8 +29,5 @@ "server/*": ["./server/*"] } }, - "include": [ - "server" - ] + "include": ["server"] } - diff --git a/google-docs/README.md b/google-docs/README.md index 719ae138..8976abae 100644 --- a/google-docs/README.md +++ b/google-docs/README.md @@ -1,26 +1,30 @@ -# Google Docs MCP +# Google Docs MCP MCP Server for Google Docs API. Create and edit documents programmatically. 
## Features ### Document Management + - **create_document** - Create a new document - **get_document** - Get document content and metadata ### Content Operations + - **insert_text** - Insert text at position - **delete_content** - Delete text range - **replace_text** - Find and replace all - **append_text** - Append to end ### Formatting + - **format_text** - Apply bold, italic, underline, font size - **insert_heading** - Insert heading (H1-H6) - **insert_list** - Create bullet/numbered lists - **remove_list** - Remove list formatting ### Elements + - **insert_table** - Insert tables - **insert_image** - Insert images from URL - **insert_page_break** - Insert page breaks @@ -118,4 +122,3 @@ GOOGLE_CLIENT_SECRET=your_client_secret ## License MIT - diff --git a/google-docs/app.json b/google-docs/app.json index d7e40745..732186b4 100644 --- a/google-docs/app.json +++ b/google-docs/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google Docs MCP provides comprehensive integration with Google Docs API, enabling full programmatic control over document creation and editing. This MCP allows AI agents to create documents, insert and format text, add headings and lists, insert tables and images, and perform find/replace operations. Perfect for document automation, content generation, and report creation workflows." 
} } - diff --git a/google-docs/package.json b/google-docs/package.json index c977d732..d87a94ac 100644 --- a/google-docs/package.json +++ b/google-docs/package.json @@ -1,8 +1,8 @@ { "name": "google-docs", "version": "1.0.0", - "description": "Google Docs MCP Server - Create and edit documents", "private": true, + "description": "Google Docs MCP Server - Create and edit documents", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-docs/tsconfig.json b/google-docs/tsconfig.json index 216c702a..6a5ef6d9 100644 --- a/google-docs/tsconfig.json +++ b/google-docs/tsconfig.json @@ -24,4 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - diff --git a/google-drive/README.md b/google-drive/README.md index c7371bfa..bce95bab 100644 --- a/google-drive/README.md +++ b/google-drive/README.md @@ -1,10 +1,11 @@ -# Google Drive MCP - +# Google Drive MCP + MCP Server for Google Drive API. Manage files, folders, and permissions. ## Features ### File Operations + - **list_files** - List files with query filtering - **get_file** - Get file metadata - **create_file** - Create new files (empty) @@ -14,10 +15,12 @@ MCP Server for Google Drive API. Manage files, folders, and permissions. 
- **search_files** - Search with Drive query syntax ### Folder Operations + - **create_folder** - Create new folders - **list_folder_contents** - List folder contents ### Permissions & Sharing + - **list_permissions** - List file permissions - **create_permission** - Share with users/groups - **delete_permission** - Remove sharing @@ -104,16 +107,15 @@ GOOGLE_CLIENT_SECRET=your_client_secret ## Drive Query Syntax -| Query | Description | -|-------|-------------| -| `name contains 'report'` | Files with 'report' in name | -| `mimeType='application/pdf'` | PDF files | -| `'folderId' in parents` | Files in folder | -| `trashed=true` | Trashed files | -| `starred=true` | Starred files | -| `fullText contains 'budget'` | Content search | +| Query | Description | +| ---------------------------- | --------------------------- | +| `name contains 'report'` | Files with 'report' in name | +| `mimeType='application/pdf'` | PDF files | +| `'folderId' in parents` | Files in folder | +| `trashed=true` | Trashed files | +| `starred=true` | Starred files | +| `fullText contains 'budget'` | Content search | ## License MIT - diff --git a/google-drive/TROUBLESHOOTING.md b/google-drive/TROUBLESHOOTING.md index 327b017f..0c4716c1 100644 --- a/google-drive/TROUBLESHOOTING.md +++ b/google-drive/TROUBLESHOOTING.md @@ -3,6 +3,7 @@ ## ❌ Erro: 401 invalid_client **Mensagem completa:** + ``` Acesso bloqueado: erro de autorização The OAuth client was not found. @@ -27,11 +28,13 @@ Erro 401: invalid_client ### 1. Verificar Credenciais Verifique o arquivo `.env`: + ```bash cat .env ``` Deve conter: + ```env GOOGLE_CLIENT_ID=seu-client-id.apps.googleusercontent.com GOOGLE_CLIENT_SECRET=seu-client-secret @@ -57,6 +60,7 @@ GOOGLE_CLIENT_SECRET=seu-client-secret Acesse: https://console.cloud.google.com/apis/library Habilite estas APIs: + - ✅ Google Drive API - ✅ Google Calendar API - ✅ Google Docs API @@ -84,6 +88,7 @@ Habilite estas APIs: ### 5. 
Verificar Projeto Certifique-se de que está no projeto correto: + ```bash # No .env, o Client ID deve terminar com .apps.googleusercontent.com # E deve corresponder ao projeto que você está vendo no console @@ -102,12 +107,10 @@ Para verificar se as credenciais estão sendo carregadas: ```typescript // No código do servidor -console.log('GOOGLE_CLIENT_ID:', process.env.GOOGLE_CLIENT_ID?.substring(0, 20) + '...'); -console.log('GOOGLE_CLIENT_SECRET:', process.env.GOOGLE_CLIENT_SECRET ? 'configured' : 'missing'); +console.log("GOOGLE_CLIENT_ID:", process.env.GOOGLE_CLIENT_ID?.substring(0, 20) + "..."); +console.log("GOOGLE_CLIENT_SECRET:", process.env.GOOGLE_CLIENT_SECRET ? "configured" : "missing"); ``` ## 📝 Nota Importante Todos os MCPs do Google (Calendar, Drive, Docs, Sheets, etc.) usam as **mesmas credenciais OAuth**. Configure uma vez e use em todos! - - diff --git a/google-drive/app.json b/google-drive/app.json index 357c8614..6129e4cf 100644 --- a/google-drive/app.json +++ b/google-drive/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google Drive MCP provides comprehensive integration with Google Drive API, enabling full programmatic control over cloud storage. This MCP allows AI agents to list, search, create, update, and delete files and folders, manage permissions and sharing, and organize content. The integration is perfect for building file management systems, automated backup solutions, and document workflows. Provides secure OAuth-based authentication." 
} } - diff --git a/google-drive/package.json b/google-drive/package.json index 598059c8..a286e85a 100644 --- a/google-drive/package.json +++ b/google-drive/package.json @@ -1,8 +1,8 @@ { "name": "google-drive", "version": "1.0.0", - "description": "Google Drive MCP Server - Manage files, folders and permissions", "private": true, + "description": "Google Drive MCP Server - Manage files, folders and permissions", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-drive/tsconfig.json b/google-drive/tsconfig.json index 216c702a..6a5ef6d9 100644 --- a/google-drive/tsconfig.json +++ b/google-drive/tsconfig.json @@ -24,4 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - diff --git a/google-forms/README.md b/google-forms/README.md index b8435cac..cb04a006 100644 --- a/google-forms/README.md +++ b/google-forms/README.md @@ -1,21 +1,24 @@ -# Google Forms MCP +# Google Forms MCP MCP Server for Google Forms API. Create forms, add questions, and collect responses. 
## Features ### Form Management + - **create_form** - Create a new form - **get_form** - Get form details and questions - **update_form** - Update title/description - **get_responder_url** - Get the form URL ### Questions + - **add_question** - Add questions (text, paragraph, radio, checkbox, dropdown, scale, date, time) - **update_question** - Update question text/required - **delete_question** - Delete a question ### Responses + - **list_responses** - List all form responses - **get_response** - Get a specific response @@ -106,18 +109,17 @@ GOOGLE_CLIENT_SECRET=your_client_secret ## Question Types -| Type | Description | -|------|-------------| -| `text` | Short text answer | -| `paragraph` | Long text answer | -| `radio` | Single choice | -| `checkbox` | Multiple choice | -| `dropdown` | Dropdown selection | -| `scale` | Linear scale | -| `date` | Date picker | -| `time` | Time picker | +| Type | Description | +| ----------- | ------------------ | +| `text` | Short text answer | +| `paragraph` | Long text answer | +| `radio` | Single choice | +| `checkbox` | Multiple choice | +| `dropdown` | Dropdown selection | +| `scale` | Linear scale | +| `date` | Date picker | +| `time` | Time picker | ## License MIT - diff --git a/google-forms/app.json b/google-forms/app.json index 306243f8..4c1a4a20 100644 --- a/google-forms/app.json +++ b/google-forms/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google Forms MCP provides integration with Google Forms API, enabling programmatic control over form creation and response collection. This MCP allows AI agents to create forms, add various question types, update form settings, and retrieve responses. Perfect for automating surveys, quizzes, and data collection workflows." 
} } - diff --git a/google-forms/package.json b/google-forms/package.json index ef93d5f6..44ddcfa8 100644 --- a/google-forms/package.json +++ b/google-forms/package.json @@ -1,8 +1,8 @@ { "name": "google-forms", "version": "1.0.0", - "description": "Google Forms MCP Server - Create forms and collect responses", "private": true, + "description": "Google Forms MCP Server - Create forms and collect responses", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-forms/tsconfig.json b/google-forms/tsconfig.json index 216c702a..6a5ef6d9 100644 --- a/google-forms/tsconfig.json +++ b/google-forms/tsconfig.json @@ -24,4 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - diff --git a/google-gemini/README.md b/google-gemini/README.md index 5750c661..a68fa232 100644 --- a/google-gemini/README.md +++ b/google-gemini/README.md @@ -1,4 +1,4 @@ -# google-gemini +# google-gemini Google Gemini App Connection for LLM uses diff --git a/google-gemini/package.json b/google-gemini/package.json index df060ae2..dc463753 100644 --- a/google-gemini/package.json +++ b/google-gemini/package.json @@ -1,8 +1,8 @@ { "name": "google-gemini", "version": "1.0.0", - "description": "Google Gemini App Connection for LLM uses", "private": true, + "description": "Google Gemini App Connection for LLM uses", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-gke-official/README.md b/google-gke-official/README.md index 90d436d7..bf38137d 100644 --- a/google-gke-official/README.md +++ b/google-gke-official/README.md @@ -39,5 +39,4 @@ https://container.googleapis.com/mcp --- -*This MCP requires an active Google Cloud account with GKE enabled.* - +_This MCP requires an active Google Cloud account with GKE enabled._ diff --git a/google-gke-official/app.json b/google-gke-official/app.json index 3579fdbe..d220290a 100644 --- a/google-gke-official/app.json +++ b/google-gke-official/app.json @@ 
-13,7 +13,17 @@ "categories": ["Developer Tools"], "official": true, "mesh_unlisted": true, - "tags": ["kubernetes", "containers", "docker", "orchestration", "google-cloud", "gke", "devops", "microservices", "cloud-native"], + "tags": [ + "kubernetes", + "containers", + "docker", + "orchestration", + "google-cloud", + "gke", + "devops", + "microservices", + "cloud-native" + ], "short_description": "Manage Kubernetes clusters on Google Cloud with GKE", "mesh_description": "Google Kubernetes Engine (GKE) is a managed Kubernetes service that simplifies deploying, managing, and scaling containerized applications using Google's infrastructure. This official MCP enables you to create and manage GKE clusters with autopilot or standard modes, configure node pools with custom machine types, and implement cluster autoscaling based on workload demands. Deploy applications using Kubernetes manifests, Helm charts, or Kustomize configurations. Manage pods, deployments, services, ingress, config maps, and secrets through intuitive natural language commands. Configure networking with VPC-native clusters, private clusters, and workload identity for secure service authentication. Set up horizontal pod autoscaling, vertical pod autoscaling, and cluster autoscaler for optimal resource utilization. Implement CI/CD pipelines with GKE integration, rolling updates with zero downtime, and canary deployments for gradual rollouts. Monitor cluster health, resource usage, and application performance with Cloud Monitoring and Logging integration. Configure security policies with Pod Security Standards, Binary Authorization, and network policies. Manage multi-cluster deployments with GKE Enterprise, implement service mesh with Anthos Service Mesh, and use Config Sync for GitOps workflows. Perfect for teams running cloud-native applications at scale." 
} diff --git a/google-gmail/README.md b/google-gmail/README.md index 6aedf8fb..aac14515 100644 --- a/google-gmail/README.md +++ b/google-gmail/README.md @@ -1,10 +1,11 @@ -# Gmail MCP +# Gmail MCP MCP Server for Gmail API integration. Read, send, search, and manage emails using the Gmail API with OAuth authentication. ## Features ### Message Management + - **list_messages** - List messages with filters and pagination - **get_message** - Get full message details (headers, body, attachments) - **send_message** - Send new emails with HTML support @@ -15,6 +16,7 @@ MCP Server for Gmail API integration. Read, send, search, and manage emails usin - **modify_message** - Add/remove labels (mark as read, star, etc.) ### Thread Management + - **list_threads** - List email conversations - **get_thread** - Get entire conversation with all messages - **trash_thread** - Move entire conversation to trash @@ -23,6 +25,7 @@ MCP Server for Gmail API integration. Read, send, search, and manage emails usin - **delete_thread** - Permanently delete entire conversation ### Label Management + - **list_labels** - List all labels (system + custom) - **get_label** - Get label details and counts - **create_label** - Create custom label with colors @@ -30,6 +33,7 @@ MCP Server for Gmail API integration. 
Read, send, search, and manage emails usin - **delete_label** - Delete custom label ### Draft Management + - **list_drafts** - List saved drafts - **get_draft** - Get draft content - **create_draft** - Create new draft @@ -212,25 +216,25 @@ Then send it: The `q` parameter in search supports Gmail's powerful query syntax: -| Query | Description | -|-------|-------------| -| `is:unread` | Unread messages | -| `is:read` | Read messages | -| `is:starred` | Starred messages | -| `is:important` | Important messages | -| `from:user@email.com` | From specific sender | -| `to:user@email.com` | To specific recipient | -| `subject:keyword` | Subject contains keyword | -| `has:attachment` | Has attachments | -| `filename:pdf` | Has PDF attachments | -| `after:2024/01/01` | After date | -| `before:2024/12/31` | Before date | -| `older_than:7d` | Older than 7 days | -| `newer_than:2d` | Newer than 2 days | -| `label:work` | Has label 'work' | -| `in:inbox` | In inbox | -| `in:sent` | In sent folder | -| `in:trash` | In trash | +| Query | Description | +| --------------------- | ------------------------ | +| `is:unread` | Unread messages | +| `is:read` | Read messages | +| `is:starred` | Starred messages | +| `is:important` | Important messages | +| `from:user@email.com` | From specific sender | +| `to:user@email.com` | To specific recipient | +| `subject:keyword` | Subject contains keyword | +| `has:attachment` | Has attachments | +| `filename:pdf` | Has PDF attachments | +| `after:2024/01/01` | After date | +| `before:2024/12/31` | Before date | +| `older_than:7d` | Older than 7 days | +| `newer_than:2d` | Newer than 2 days | +| `label:work` | Has label 'work' | +| `in:inbox` | In inbox | +| `in:sent` | In sent folder | +| `in:trash` | In trash | Combine queries: `from:john subject:meeting is:unread` @@ -272,22 +276,21 @@ This MCP requests the following scopes: Common system labels you can use: -| Label | Description | -|-------|-------------| -| `INBOX` | Inbox | -| `SENT` | 
Sent messages | -| `DRAFT` | Drafts | -| `SPAM` | Spam | -| `TRASH` | Trash | -| `UNREAD` | Unread messages | -| `STARRED` | Starred | -| `IMPORTANT` | Important | -| `CATEGORY_PERSONAL` | Personal category | -| `CATEGORY_SOCIAL` | Social category | +| Label | Description | +| --------------------- | ------------------- | +| `INBOX` | Inbox | +| `SENT` | Sent messages | +| `DRAFT` | Drafts | +| `SPAM` | Spam | +| `TRASH` | Trash | +| `UNREAD` | Unread messages | +| `STARRED` | Starred | +| `IMPORTANT` | Important | +| `CATEGORY_PERSONAL` | Personal category | +| `CATEGORY_SOCIAL` | Social category | | `CATEGORY_PROMOTIONS` | Promotions category | -| `CATEGORY_UPDATES` | Updates category | +| `CATEGORY_UPDATES` | Updates category | ## License MIT - diff --git a/google-gmail/app.json b/google-gmail/app.json index 23b1aeba..f371983a 100644 --- a/google-gmail/app.json +++ b/google-gmail/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Gmail MCP provides comprehensive integration with Gmail API, enabling full programmatic control over email management. This MCP allows AI agents to read messages and threads, send new emails, search with Gmail query syntax, manage labels, create and send drafts, and organize your inbox. It supports advanced features including thread-based conversations, label management, attachment handling, and powerful search capabilities. The integration is perfect for building intelligent email assistants, automated responders, email-based workflow triggers, and productivity tools. Ideal for users who need to automate email management, integrate messaging into business processes, or build email-aware applications. Provides secure OAuth-based authentication for Gmail access." 
} } - diff --git a/google-gmail/package.json b/google-gmail/package.json index c105bf78..22b63613 100644 --- a/google-gmail/package.json +++ b/google-gmail/package.json @@ -1,8 +1,8 @@ { "name": "google-gmail", "version": "1.0.0", - "description": "Google Gmail MCP Server - Read, send and manage emails", "private": true, + "description": "Google Gmail MCP Server - Read, send and manage emails", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-gmail/tsconfig.json b/google-gmail/tsconfig.json index 70f5415a..6a5ef6d9 100644 --- a/google-gmail/tsconfig.json +++ b/google-gmail/tsconfig.json @@ -21,13 +21,6 @@ }, "types": ["bun-types"] }, - "include": [ - "server/**/*.ts", - "shared/**/*.ts" - ], - "exclude": [ - "node_modules", - "dist" - ] + "include": ["server/**/*.ts", "shared/**/*.ts"], + "exclude": ["node_modules", "dist"] } - diff --git a/google-maps-official/README.md b/google-maps-official/README.md index c2dd7990..57c5447b 100644 --- a/google-maps-official/README.md +++ b/google-maps-official/README.md @@ -39,5 +39,4 @@ https://mapstools.googleapis.com/mcp --- -*This MCP requires an active Google Maps Platform API key.* - +_This MCP requires an active Google Maps Platform API key._ diff --git a/google-maps-official/app.json b/google-maps-official/app.json index d14e3475..c7b0d215 100644 --- a/google-maps-official/app.json +++ b/google-maps-official/app.json @@ -13,7 +13,17 @@ "categories": ["Mapping"], "official": true, "mesh_unlisted": true, - "tags": ["maps", "geocoding", "directions", "places", "location", "google-maps", "geospatial", "navigation", "gis"], + "tags": [ + "maps", + "geocoding", + "directions", + "places", + "location", + "google-maps", + "geospatial", + "navigation", + "gis" + ], "short_description": "Access Google Maps Platform APIs through natural language", "mesh_description": "Google Maps Platform provides comprehensive mapping and location services used by millions of applications worldwide. 
This official MCP gives you natural language access to Google Maps APIs including geocoding for converting addresses to coordinates and reverse geocoding for coordinates to addresses. Get optimal route directions with multiple transportation modes (driving, walking, bicycling, transit) considering real-time traffic conditions, toll roads, and route alternatives. Search and discover places with detailed information including business hours, ratings, reviews, photos, and contact details. Calculate accurate distances and travel times between multiple locations using the Distance Matrix API. Optimize multi-stop routes with the Directions API for efficient delivery and logistics planning. Access detailed place information with the Places API including nearby search, text search, and place details. Use the Geolocation API to determine device location based on cell towers and WiFi access points. Implement time zone services to convert coordinates to time zones and calculate local times. Create custom maps with markers, polylines, polygons, and info windows. Access Street View imagery programmatically for location verification and virtual tours. Utilize Elevation API for topographic data and terrain information. Perfect for location-based applications, delivery services, real estate platforms, and travel apps." } diff --git a/google-meet/README.md b/google-meet/README.md index fcb68f07..795525dc 100644 --- a/google-meet/README.md +++ b/google-meet/README.md @@ -1,22 +1,25 @@ # Google Meet MCP - + MCP Server for Google Meet API. Create and manage video meetings. 
## Features ### Meeting Spaces + - **create_meeting** - Create a new meeting space - **get_meeting** - Get meeting details - **update_meeting** - Update meeting settings - **end_meeting** - End active conference ### Conference Records + - **list_conference_records** - List past meetings - **get_conference_record** - Get conference details - **list_participants** - List meeting participants - **get_participant_sessions** - Get participant join/leave times ### Recordings & Transcripts + - **list_recordings** - List meeting recordings - **get_recording** - Get recording details - **list_transcripts** - List meeting transcripts @@ -76,13 +79,12 @@ Returns a meeting link like `https://meet.google.com/abc-defg-hij` ## Access Types -| Type | Description | -|------|-------------| -| `OPEN` | Anyone with the link can join | -| `TRUSTED` | Only users in your organization | -| `RESTRICTED` | Only invited users | +| Type | Description | +| ------------ | ------------------------------- | +| `OPEN` | Anyone with the link can join | +| `TRUSTED` | Only users in your organization | +| `RESTRICTED` | Only invited users | ## License MIT - diff --git a/google-meet/app.json b/google-meet/app.json index 6a187f8e..1c7b636b 100644 --- a/google-meet/app.json +++ b/google-meet/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google Meet MCP provides integration with Google Meet API, enabling programmatic control over video meetings. This MCP allows AI agents to create meeting spaces, retrieve meeting details, list participants, and access conference recordings. Perfect for automating meeting workflows and building meeting management applications." 
} } - diff --git a/google-meet/package.json b/google-meet/package.json index f810a2ca..bf72e0b9 100644 --- a/google-meet/package.json +++ b/google-meet/package.json @@ -1,8 +1,8 @@ { "name": "google-meet", "version": "1.0.0", - "description": "Google Meet MCP Server - Create and manage meetings", "private": true, + "description": "Google Meet MCP Server - Create and manage meetings", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-meet/tsconfig.json b/google-meet/tsconfig.json index 216c702a..6a5ef6d9 100644 --- a/google-meet/tsconfig.json +++ b/google-meet/tsconfig.json @@ -24,4 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - diff --git a/google-search-console/README.md b/google-search-console/README.md index 635c7644..6c33832d 100644 --- a/google-search-console/README.md +++ b/google-search-console/README.md @@ -1,25 +1,29 @@ -# Google Search Console MCP +# Google Search Console MCP MCP Server for Google Search Console integration. Access search analytics, manage sitemaps, inspect URLs, and monitor site performance using the Google Search Console API. 
## Features ### Search Analytics + - **query_search_analytics** - Query search analytics data (clicks, impressions, CTR, position) with filters by date, query, page, country, device, and search type ### Sites Management + - **list_sites** - List all sites in Google Search Console - **get_site** - Get information about a specific site - **add_site** - Add a new site to Google Search Console - **remove_site** - Remove a site from Google Search Console ### Sitemaps Management + - **list_sitemaps** - List all sitemaps for a site - **get_sitemap** - Get information about a specific sitemap - **submit_sitemap** - Submit a sitemap to Google Search Console - **delete_sitemap** - Delete a sitemap from Google Search Console ### URL Inspection + - **inspect_url** - Inspect a URL's Google index status, including indexing state, mobile usability, AMP status, and rich results ## Setup diff --git a/google-search-console/app.json b/google-search-console/app.json index 67cb149d..61a0112f 100644 --- a/google-search-console/app.json +++ b/google-search-console/app.json @@ -12,7 +12,16 @@ "metadata": { "categories": ["SEO", "Analytics"], "official": false, - "tags": ["google", "search-console", "seo", "analytics", "sitemaps", "search", "webmaster", "indexing"], + "tags": [ + "google", + "search-console", + "seo", + "analytics", + "sitemaps", + "search", + "webmaster", + "indexing" + ], "short_description": "Integrate with Google Search Console to access search analytics, manage sitemaps, inspect URLs, and monitor site performance.", "mesh_description": "The Google Search Console MCP provides comprehensive integration with Google Search Console, enabling programmatic access to search performance data, sitemap management, and URL inspection capabilities. **Key Features** - Query search analytics data (clicks, impressions, CTR, position) with filters by date, query, page, country, device, and search type. Manage sitemaps by listing, submitting, and deleting sitemaps for your sites. 
Inspect URL index status including indexing state, mobile usability, AMP status, and rich results. Manage sites by adding, removing, and listing sites in your Search Console account. **Use Cases** - Perfect for SEO professionals who need to automate search performance reporting, monitor keyword rankings, track click-through rates, and analyze search traffic patterns. Ideal for developers building SEO dashboards, automated sitemap submission workflows, URL indexing monitoring systems, and search performance analytics tools. Great for content teams tracking article performance, identifying top-performing pages, and optimizing content based on search data. **Authentication** - Uses OAuth 2.0 with Google Search Console API scopes. Provides secure access to your Search Console data through Google's standard authentication flow." } diff --git a/google-search-console/package.json b/google-search-console/package.json index d9d18457..68673885 100644 --- a/google-search-console/package.json +++ b/google-search-console/package.json @@ -1,8 +1,8 @@ { "name": "google-search-console", "version": "1.0.0", - "description": "Google Search Console MCP - Search analytics, sitemaps, sites, and URL inspection", "private": true, + "description": "Google Search Console MCP - Search analytics, sitemaps, sites, and URL inspection", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-search-console/tsconfig.json b/google-search-console/tsconfig.json index e40c9e8a..6a5ef6d9 100644 --- a/google-search-console/tsconfig.json +++ b/google-search-console/tsconfig.json @@ -21,12 +21,6 @@ }, "types": ["bun-types"] }, - "include": [ - "server/**/*.ts", - "shared/**/*.ts" - ], - "exclude": [ - "node_modules", - "dist" - ] + "include": ["server/**/*.ts", "shared/**/*.ts"], + "exclude": ["node_modules", "dist"] } diff --git a/google-sheets/README.md b/google-sheets/README.md index e7b3ec5b..14eace40 100644 --- a/google-sheets/README.md +++ 
b/google-sheets/README.md @@ -1,4 +1,4 @@ -# Google Sheets MCP +# Google Sheets MCP MCP Server for Google Sheets API. Full-featured integration for reading, writing, formatting, and managing spreadsheet data programmatically. @@ -6,118 +6,118 @@ MCP Server for Google Sheets API. Full-featured integration for reading, writing This MCP provides **40+ tools** covering almost all Google Sheets API capabilities: -| Category | Tools | -|----------|-------| -| Spreadsheet Management | 8 | -| Value Operations | 7 | -| Formatting & Styling | 10 | -| Dimension Operations | 10 | -| Charts & Visualization | 2 | -| Data Validation | 2 | -| Conditional Formatting | 2 | -| Protection | 2 | -| Filters | 5 | -| Analysis (Pivot, Named Ranges) | 3 | +| Category | Tools | +| ------------------------------ | ----- | +| Spreadsheet Management | 8 | +| Value Operations | 7 | +| Formatting & Styling | 10 | +| Dimension Operations | 10 | +| Charts & Visualization | 2 | +| Data Validation | 2 | +| Conditional Formatting | 2 | +| Protection | 2 | +| Filters | 5 | +| Analysis (Pivot, Named Ranges) | 3 | ## Spreadsheet Management -| Tool | Description | -|------|-------------| -| `create_spreadsheet` | Create a new spreadsheet | -| `get_spreadsheet` | Get spreadsheet metadata and sheet list | -| `add_sheet` | Add a new sheet/tab | -| `delete_sheet` | Delete a sheet | -| `rename_sheet` | Rename a sheet | -| `duplicate_sheet` | Copy an existing sheet | -| `freeze_rows` | Freeze rows at top (keep headers visible) | -| `freeze_columns` | Freeze columns at left | +| Tool | Description | +| -------------------- | ----------------------------------------- | +| `create_spreadsheet` | Create a new spreadsheet | +| `get_spreadsheet` | Get spreadsheet metadata and sheet list | +| `add_sheet` | Add a new sheet/tab | +| `delete_sheet` | Delete a sheet | +| `rename_sheet` | Rename a sheet | +| `duplicate_sheet` | Copy an existing sheet | +| `freeze_rows` | Freeze rows at top (keep headers visible) | +| 
`freeze_columns` | Freeze columns at left | ## Value Operations -| Tool | Description | -|------|-------------| -| `read_range` | Read values from a range | -| `write_range` | Write values to a range | -| `append_rows` | Append rows to a table | -| `clear_range` | Clear values from a range | -| `batch_read` | Read multiple ranges at once | -| `batch_write` | Write to multiple ranges at once | +| Tool | Description | +| --------------- | ------------------------------------- | +| `read_range` | Read values from a range | +| `write_range` | Write values to a range | +| `append_rows` | Append rows to a table | +| `clear_range` | Clear values from a range | +| `batch_read` | Read multiple ranges at once | +| `batch_write` | Write to multiple ranges at once | | `read_formulas` | Read formulas (not calculated values) | ## Formatting & Styling -| Tool | Description | -|------|-------------| -| `format_cells` | Apply text formatting (bold, colors, font size) | -| `auto_resize_columns` | Auto-fit column widths | -| `sort_range` | Sort data by column | -| `find_replace` | Find and replace text | -| `merge_cells` | Merge multiple cells | -| `unmerge_cells` | Unmerge cells | -| `set_borders` | Add borders to cells | -| `add_banding` | Add alternating row colors | -| `set_number_format` | Format numbers (currency, %, date) | -| `add_note` | Add a note/comment to a cell | +| Tool | Description | +| --------------------- | ----------------------------------------------- | +| `format_cells` | Apply text formatting (bold, colors, font size) | +| `auto_resize_columns` | Auto-fit column widths | +| `sort_range` | Sort data by column | +| `find_replace` | Find and replace text | +| `merge_cells` | Merge multiple cells | +| `unmerge_cells` | Unmerge cells | +| `set_borders` | Add borders to cells | +| `add_banding` | Add alternating row colors | +| `set_number_format` | Format numbers (currency, %, date) | +| `add_note` | Add a note/comment to a cell | ## Dimension Operations (Rows & 
Columns) -| Tool | Description | -|------|-------------| -| `insert_rows` | Insert rows at position | -| `insert_columns` | Insert columns at position | -| `delete_rows` | Delete rows | -| `delete_columns` | Delete columns | -| `move_rows` | Move rows to new position | -| `move_columns` | Move columns to new position | -| `hide_rows` | Hide/show rows | -| `hide_columns` | Hide/show columns | -| `resize_rows` | Set row height | -| `resize_columns` | Set column width | +| Tool | Description | +| ---------------- | ---------------------------- | +| `insert_rows` | Insert rows at position | +| `insert_columns` | Insert columns at position | +| `delete_rows` | Delete rows | +| `delete_columns` | Delete columns | +| `move_rows` | Move rows to new position | +| `move_columns` | Move columns to new position | +| `hide_rows` | Hide/show rows | +| `hide_columns` | Hide/show columns | +| `resize_rows` | Set row height | +| `resize_columns` | Set column width | ## Charts & Visualization -| Tool | Description | -|------|-------------| +| Tool | Description | +| -------------- | ------------------------------------------- | | `create_chart` | Create chart (bar, line, pie, column, area) | -| `delete_chart` | Delete a chart | +| `delete_chart` | Delete a chart | ## Data Validation -| Tool | Description | -|------|-------------| -| `add_data_validation` | Add dropdowns, checkboxes, or constraints | -| `clear_data_validation` | Remove validation rules | +| Tool | Description | +| ----------------------- | ----------------------------------------- | +| `add_data_validation` | Add dropdowns, checkboxes, or constraints | +| `clear_data_validation` | Remove validation rules | ## Conditional Formatting -| Tool | Description | -|------|-------------| -| `add_conditional_formatting` | Add auto-formatting rules | -| `clear_conditional_formatting` | Remove formatting rules | +| Tool | Description | +| ------------------------------ | ------------------------- | +| 
`add_conditional_formatting` | Add auto-formatting rules | +| `clear_conditional_formatting` | Remove formatting rules | ## Protection -| Tool | Description | -|------|-------------| -| `protect_range` | Protect cells from editing | -| `unprotect_range` | Remove protection | +| Tool | Description | +| ----------------- | -------------------------- | +| `protect_range` | Protect cells from editing | +| `unprotect_range` | Remove protection | ## Filters -| Tool | Description | -|------|-------------| -| `set_basic_filter` | Add filter dropdowns to columns | -| `clear_basic_filter` | Remove filters | -| `create_filter_view` | Create saved filter view | -| `delete_filter_view` | Delete filter view | -| `add_slicer` | Add visual filter control | +| Tool | Description | +| -------------------- | ------------------------------- | +| `set_basic_filter` | Add filter dropdowns to columns | +| `clear_basic_filter` | Remove filters | +| `create_filter_view` | Create saved filter view | +| `delete_filter_view` | Delete filter view | +| `add_slicer` | Add visual filter control | ## Analysis -| Tool | Description | -|------|-------------| -| `create_named_range` | Create named range for formulas | -| `delete_named_range` | Delete named range | +| Tool | Description | +| -------------------- | ------------------------------------ | +| `create_named_range` | Create named range for formulas | +| `delete_named_range` | Delete named range | | `create_pivot_table` | Create pivot table for data analysis | ## Setup @@ -309,18 +309,19 @@ GOOGLE_CLIENT_SECRET=your_client_secret ## A1 Notation Reference -| Notation | Description | -|----------|-------------| -| `A1` | Single cell A1 | -| `A1:B2` | Range from A1 to B2 | -| `A:A` | Entire column A | -| `1:1` | Entire row 1 | -| `A1:A` | Column A starting from row 1 | -| `Sheet1!A1:B2` | Range in specific sheet | +| Notation | Description | +| -------------- | ---------------------------- | +| `A1` | Single cell A1 | +| `A1:B2` | Range from A1 
to B2 | +| `A:A` | Entire column A | +| `1:1` | Entire row 1 | +| `A1:A` | Column A starting from row 1 | +| `Sheet1!A1:B2` | Range in specific sheet | ## Index Reference All row/column indexes are **0-based**: + - Row 1 = index 0 - Column A = index 0 - Column B = index 1 @@ -329,6 +330,7 @@ All row/column indexes are **0-based**: ## Color Format Colors use RGB values from 0 to 1: + ```json { "red": 0.2, @@ -338,6 +340,7 @@ Colors use RGB values from 0 to 1: ``` Common colors: + - White: `{ "red": 1, "green": 1, "blue": 1 }` - Black: `{ "red": 0, "green": 0, "blue": 0 }` - Red: `{ "red": 1, "green": 0, "blue": 0 }` diff --git a/google-sheets/app.json b/google-sheets/app.json index f47b857f..9598cac0 100644 --- a/google-sheets/app.json +++ b/google-sheets/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google Sheets MCP provides comprehensive integration with Google Sheets API, enabling full programmatic control over spreadsheet data. This MCP allows AI agents to create spreadsheets, read and write cell values, append rows, manage sheets within workbooks, format cells, sort data, and perform find/replace operations. It supports advanced features including range-based operations, batch updates, and conditional formatting. The integration is perfect for building data processing pipelines, automated reporting systems, and data-driven applications. Ideal for users who need to automate spreadsheet workflows, integrate data management into business processes, or build spreadsheet-aware applications. Provides secure OAuth-based authentication for Sheets access." 
} } - diff --git a/google-sheets/package.json b/google-sheets/package.json index 18db122e..51cb4443 100644 --- a/google-sheets/package.json +++ b/google-sheets/package.json @@ -1,8 +1,8 @@ { "name": "google-sheets", "version": "1.0.0", - "description": "Google Sheets MCP Server - Read, write and manage spreadsheets", "private": true, + "description": "Google Sheets MCP Server - Read, write and manage spreadsheets", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-sheets/tsconfig.json b/google-sheets/tsconfig.json index 216c702a..6a5ef6d9 100644 --- a/google-sheets/tsconfig.json +++ b/google-sheets/tsconfig.json @@ -24,4 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - diff --git a/google-slides/README.md b/google-slides/README.md index 058c9eb5..33ab9629 100644 --- a/google-slides/README.md +++ b/google-slides/README.md @@ -1,20 +1,23 @@ -# Google Slides MCP +# Google Slides MCP MCP Server for Google Slides API. Create and edit presentations programmatically. ## Features ### Presentation Management + - **create_presentation** - Create a new presentation - **get_presentation** - Get presentation details and slides ### Slide Operations + - **add_slide** - Add slides with different layouts - **delete_slide** - Delete a slide - **duplicate_slide** - Copy an existing slide - **move_slide** - Reorder slides ### Elements + - **insert_text** - Add text boxes - **insert_image** - Add images from URL - **insert_shape** - Add shapes (rectangle, ellipse, arrow, etc.) 
@@ -98,16 +101,16 @@ GOOGLE_CLIENT_SECRET=your_client_secret ## Slide Layouts -| Layout | Description | -|--------|-------------| -| `BLANK` | Empty slide | -| `TITLE` | Title slide | -| `TITLE_AND_BODY` | Title with content area | -| `TITLE_AND_TWO_COLUMNS` | Two-column layout | -| `SECTION_HEADER` | Section divider | -| `TITLE_ONLY` | Title without body | -| `CAPTION_ONLY` | Caption slide | -| `BIG_NUMBER` | Large number display | +| Layout | Description | +| ----------------------- | ----------------------- | +| `BLANK` | Empty slide | +| `TITLE` | Title slide | +| `TITLE_AND_BODY` | Title with content area | +| `TITLE_AND_TWO_COLUMNS` | Two-column layout | +| `SECTION_HEADER` | Section divider | +| `TITLE_ONLY` | Title without body | +| `CAPTION_ONLY` | Caption slide | +| `BIG_NUMBER` | Large number display | ## Shape Types @@ -122,4 +125,3 @@ Standard slide size: **720 x 540 points** (10" x 7.5") ## License MIT - diff --git a/google-slides/app.json b/google-slides/app.json index 3a5b8fe8..ea94d8b2 100644 --- a/google-slides/app.json +++ b/google-slides/app.json @@ -17,4 +17,3 @@ "mesh_description": "The Google Slides MCP provides integration with Google Slides API, enabling programmatic control over presentations. This MCP allows AI agents to create presentations, add and manage slides, insert text boxes, images, shapes, and tables. Perfect for automating presentation creation and building slide deck generators." 
} } - diff --git a/google-slides/package.json b/google-slides/package.json index b06cf5e1..f961e857 100644 --- a/google-slides/package.json +++ b/google-slides/package.json @@ -1,8 +1,8 @@ { "name": "google-slides", "version": "1.0.0", - "description": "Google Slides MCP Server - Create and edit presentations", "private": true, + "description": "Google Slides MCP Server - Create and edit presentations", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/google-slides/tsconfig.json b/google-slides/tsconfig.json index 216c702a..6a5ef6d9 100644 --- a/google-slides/tsconfig.json +++ b/google-slides/tsconfig.json @@ -24,4 +24,3 @@ "include": ["server/**/*.ts", "shared/**/*.ts"], "exclude": ["node_modules", "dist"] } - diff --git a/google-tag-manager/README.md b/google-tag-manager/README.md index d5446acb..1e0b1d36 100644 --- a/google-tag-manager/README.md +++ b/google-tag-manager/README.md @@ -5,21 +5,25 @@ MCP Server for Google Tag Manager integration. Manage accounts, containers, work ## Features ### Account Management + - **list_accounts** - List all accessible GTM accounts - **get_account** - Get details of a specific account ### Container Management + - **list_containers** - List containers in an account - **get_container** - Get details of a specific container - **create_container** - Create a new container - **delete_container** - Delete a container ### Workspace Management + - **list_workspaces** - List workspaces in a container - **get_workspace** - Get details of a specific workspace - **create_workspace** - Create a new workspace ### Tag Management + - **list_tags** - List tags in a workspace - **get_tag** - Get details of a specific tag - **create_tag** - Create a new tag @@ -27,6 +31,7 @@ MCP Server for Google Tag Manager integration. 
Manage accounts, containers, work - **delete_tag** - Delete a tag ### Trigger Management + - **list_triggers** - List triggers in a workspace - **get_trigger** - Get details of a specific trigger - **create_trigger** - Create a new trigger @@ -34,6 +39,7 @@ MCP Server for Google Tag Manager integration. Manage accounts, containers, work - **delete_trigger** - Delete a trigger ### Variable Management + - **list_variables** - List variables in a workspace - **get_variable** - Get details of a specific variable - **create_variable** - Create a new variable @@ -457,4 +463,3 @@ The MCP will throw errors for: ## License MIT - diff --git a/google-tag-manager/app.json b/google-tag-manager/app.json index e9012d65..375920b2 100644 --- a/google-tag-manager/app.json +++ b/google-tag-manager/app.json @@ -1,19 +1,19 @@ { - "scopeName": "deco", - "name": "google-tag-manager", - "friendlyName": "Google Tag Manager", - "connection": { - "type": "HTTP", - "url": "https://sites-google-tag-manager.decocache.com/mcp" - }, - "description": "Manage Google Tag Manager resources including accounts, containers, workspaces, tags, triggers and variables via API.", - "icon": "https://assets.decocache.com/decocms/9636f785-2d20-4b90-8fd8-6bd8f0b29afb/google-tag-manager.jpg", - "unlisted": false, + "scopeName": "deco", + "name": "google-tag-manager", + "friendlyName": "Google Tag Manager", + "connection": { + "type": "HTTP", + "url": "https://sites-google-tag-manager.decocache.com/mcp" + }, + "description": "Manage Google Tag Manager resources including accounts, containers, workspaces, tags, triggers and variables via API.", + "icon": "https://assets.decocache.com/decocms/9636f785-2d20-4b90-8fd8-6bd8f0b29afb/google-tag-manager.jpg", + "unlisted": false, "metadata": { "categories": ["Analytics"], - "official": false, - "tags": ["google", "tag-manager", "gtm", "analytics", "tracking", "marketing"], - "short_description": "Manage Google Tag Manager resources including accounts, containers, 
workspaces, tags, triggers and variables via API.", - "mesh_description": "The Google Tag Manager MCP provides comprehensive integration with GTM's API, enabling programmatic management of all aspects of tag management infrastructure. This MCP allows AI agents and automation systems to create, read, update, and delete GTM resources including accounts, containers, workspaces, tags, triggers, and variables. It supports complete workspace management workflows, from creating new workspaces for isolated development to publishing changes to production containers. The integration enables automated tag deployment, bulk tag operations, configuration backups, and cross-container synchronization. Perfect for marketing teams, web analysts, and developers who need to manage tracking implementations at scale, automate tag deployments across multiple properties, or integrate GTM management into their CI/CD pipelines. Supports all GTM resource types and enables advanced automation scenarios for tag governance and compliance." - } + "official": false, + "tags": ["google", "tag-manager", "gtm", "analytics", "tracking", "marketing"], + "short_description": "Manage Google Tag Manager resources including accounts, containers, workspaces, tags, triggers and variables via API.", + "mesh_description": "The Google Tag Manager MCP provides comprehensive integration with GTM's API, enabling programmatic management of all aspects of tag management infrastructure. This MCP allows AI agents and automation systems to create, read, update, and delete GTM resources including accounts, containers, workspaces, tags, triggers, and variables. It supports complete workspace management workflows, from creating new workspaces for isolated development to publishing changes to production containers. The integration enables automated tag deployment, bulk tag operations, configuration backups, and cross-container synchronization. 
Perfect for marketing teams, web analysts, and developers who need to manage tracking implementations at scale, automate tag deployments across multiple properties, or integrate GTM management into their CI/CD pipelines. Supports all GTM resource types and enables advanced automation scenarios for tag governance and compliance." + } } diff --git a/google-tag-manager/package.json b/google-tag-manager/package.json index 5896495e..0f5caf6c 100644 --- a/google-tag-manager/package.json +++ b/google-tag-manager/package.json @@ -1,8 +1,8 @@ { "name": "google-tag-manager", "version": "1.0.0", - "description": "Google Tag Manager MCP Server - Manage accounts, containers, workspaces, tags, triggers and variables", "private": true, + "description": "Google Tag Manager MCP Server - Manage accounts, containers, workspaces, tags, triggers and variables", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", @@ -25,4 +25,3 @@ "node": ">=22.0.0" } } - diff --git a/google-tag-manager/tsconfig.json b/google-tag-manager/tsconfig.json index a7e0e946..b4b02a45 100644 --- a/google-tag-manager/tsconfig.json +++ b/google-tag-manager/tsconfig.json @@ -29,8 +29,5 @@ "server/*": ["./server/*"] } }, - "include": [ - "server" - ] + "include": ["server"] } - diff --git a/grain-official/README.md b/grain-official/README.md index ada7d988..81b2cc58 100644 --- a/grain-official/README.md +++ b/grain-official/README.md @@ -38,5 +38,4 @@ https://api.grain.com/_/mcp --- -*This MCP requires an active Grain account to function.* - +_This MCP requires an active Grain account to function._ diff --git a/grain/README.md b/grain/README.md index f16a0aab..ab481c48 100644 --- a/grain/README.md +++ b/grain/README.md @@ -67,13 +67,13 @@ Create the table manually in Supabase using [`server/db/schema.sql`](server/db/s ## Tools -| Tool | Purpose | -| --- | --- | -| `LIST_RECORDINGS` | List and filter recordings from Grain API | -| `GET_RECORDING` | Get full details for one recording, with optional 
selective output (`summary`, `transcript`, `highlights`) | -| `GET_TRANSCRIPT` | Get transcript URL and optional transcript content (`json`, `txt`, `srt`, `vtt`) | -| `GET_SUMMARY` | Get only summary fields (`summary`, `summary_points`, `intelligence_notes_md`) | -| `SEARCH_INDEXED_RECORDINGS` | Search Supabase-indexed recordings by query and/or date filters | +| Tool | Purpose | +| --------------------------- | ---------------------------------------------------------------------------------------------------------- | +| `LIST_RECORDINGS` | List and filter recordings from Grain API | +| `GET_RECORDING` | Get full details for one recording, with optional selective output (`summary`, `transcript`, `highlights`) | +| `GET_TRANSCRIPT` | Get transcript URL and optional transcript content (`json`, `txt`, `srt`, `vtt`) | +| `GET_SUMMARY` | Get only summary fields (`summary`, `summary_points`, `intelligence_notes_md`) | +| `SEARCH_INDEXED_RECORDINGS` | Search Supabase-indexed recordings by query and/or date filters | ## Environment Variables @@ -103,4 +103,3 @@ bun run build - Grain API base URL: `https://api.grain.com` - This project is part of the Deco MCP monorepo - diff --git a/grain/app.json b/grain/app.json index 5f38c083..fb44e40b 100644 --- a/grain/app.json +++ b/grain/app.json @@ -12,7 +12,16 @@ "metadata": { "categories": ["Productivity", "Communication"], "official": false, - "tags": ["grain", "meetings", "recordings", "transcripts", "ai-summaries", "video", "productivity", "collaboration"], + "tags": [ + "grain", + "meetings", + "recordings", + "transcripts", + "ai-summaries", + "video", + "productivity", + "collaboration" + ], "short_description": "Integrate with Grain to access and manage your meeting recordings, transcripts and AI summaries.", "mesh_description": "The Grain MCP is maintained by Deco and built on top of Grain public APIs. 
It lets AI agents list recordings, retrieve full meeting details, fetch only transcripts or summaries to reduce token usage, and search indexed recordings stored in Supabase. The MCP creates and manages Grain webhooks so new recording events are indexed automatically, enabling fast local search while still supporting direct API access for complete data." } diff --git a/grain/package.json b/grain/package.json index 28e85b53..1581c7d0 100644 --- a/grain/package.json +++ b/grain/package.json @@ -1,8 +1,8 @@ { "name": "grain", "version": "1.0.0", - "description": "Grain MCP - Access and manage your Grain recordings and meetings", "private": true, + "description": "Grain MCP - Access and manage your Grain recordings and meetings", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/grain/server/lib/grain-client.ts b/grain/server/lib/grain-client.ts index cb3d92d8..14e47dc4 100644 --- a/grain/server/lib/grain-client.ts +++ b/grain/server/lib/grain-client.ts @@ -136,7 +136,9 @@ export class GrainClient { return this.request( "GET", GRAIN_RECORDINGS_ENDPOINT, - { params: queryParams }, + { + params: queryParams, + }, ); } @@ -164,7 +166,9 @@ export class GrainClient { return this.request( "GET", `${GRAIN_RECORDINGS_ENDPOINT}/${recordingId}`, - { params }, + { + params, + }, ); } diff --git a/grain/tsconfig.json b/grain/tsconfig.json index 30810966..e760d47f 100644 --- a/grain/tsconfig.json +++ b/grain/tsconfig.json @@ -25,4 +25,3 @@ "include": ["server/**/*"], "exclude": ["node_modules", "dist"] } - diff --git a/hubspot-official/README.md b/hubspot-official/README.md index 8a89a1ea..5b5a37b6 100644 --- a/hubspot-official/README.md +++ b/hubspot-official/README.md @@ -40,5 +40,4 @@ https://app.hubspot.com/mcp/v1/http --- -*This MCP requires an active HubSpot account and API key.* - +_This MCP requires an active HubSpot account and API key._ diff --git a/hubspot-official/app.json b/hubspot-official/app.json index d3f02fe5..d6c7ce95 100644 --- 
a/hubspot-official/app.json +++ b/hubspot-official/app.json @@ -13,7 +13,17 @@ "categories": ["CRM"], "official": true, "mesh_unlisted": true, - "tags": ["crm", "marketing", "sales", "customer-service", "automation", "hubspot", "email-marketing", "lead-generation", "analytics"], + "tags": [ + "crm", + "marketing", + "sales", + "customer-service", + "automation", + "hubspot", + "email-marketing", + "lead-generation", + "analytics" + ], "short_description": "Manage your CRM, marketing, sales, and service operations with HubSpot", "mesh_description": "HubSpot is a comprehensive customer relationship management (CRM) platform that combines marketing, sales, customer service, and content management tools in one unified system. This official MCP provides natural language access to HubSpot's powerful features, enabling you to manage the entire customer lifecycle from first touch to loyal advocate. Create and update contacts, companies, and deals with custom properties and associations. Build and manage your sales pipeline with deal stages, forecasting, and revenue tracking. Design and execute email marketing campaigns with personalization, A/B testing, and automated follow-ups. Create landing pages, forms, and CTAs to capture leads and drive conversions. Set up sophisticated workflow automation for lead nurturing, task creation, and data enrichment. Manage customer support with ticketing system, knowledge base, and live chat integration. Access detailed analytics and reports on marketing performance, sales activities, and customer satisfaction metrics. Configure lead scoring rules to prioritize high-value prospects, create custom dashboards for team visibility, and set up sequences for automated email outreach. Integrate with hundreds of apps through HubSpot's App Marketplace. Perfect for growing businesses that need an all-in-one platform to attract, engage, and delight customers at scale." 
} diff --git a/hyperdx/app.json b/hyperdx/app.json index 9971fa36..3f229cfc 100644 --- a/hyperdx/app.json +++ b/hyperdx/app.json @@ -12,7 +12,17 @@ "metadata": { "categories": ["Observability"], "official": false, - "tags": ["hyperdx", "observability", "logs", "metrics", "traces", "monitoring", "debugging", "apm", "opentelemetry"], + "tags": [ + "hyperdx", + "observability", + "logs", + "metrics", + "traces", + "monitoring", + "debugging", + "apm", + "opentelemetry" + ], "short_description": "Query observability data from HyperDX. Search logs, metrics, and traces with time series charts and pattern analysis.", "mesh_description": "The HyperDX MCP provides deep integration with HyperDX, an open-source observability platform for developers. This MCP enables AI agents to search and analyze logs, query metrics, and explore distributed traces from applications instrumented with OpenTelemetry. It supports flexible log search with field-based filtering (level, service, site, etc.), time series chart queries with multiple aggregation functions (count, avg, sum, p50, p95, p99, etc.), and grouping by any field for detailed breakdowns. The integration allows for automated debugging workflows, error pattern detection, performance analysis, and incident investigation. Use it to search for errors across services, analyze request patterns, monitor application health, and correlate events across your distributed system. Ideal for developers, SREs, and DevOps teams who need to quickly investigate issues, understand system behavior, or build automated observability workflows. Supports both environment variable and Bearer token authentication for flexible deployment." 
} diff --git a/hyperdx/examples/discover-data-deco.md b/hyperdx/examples/discover-data-deco.md index ead34d63..4bae01f6 100644 --- a/hyperdx/examples/discover-data-deco.md +++ b/hyperdx/examples/discover-data-deco.md @@ -11,6 +11,7 @@ This shows how to use `DISCOVER_DATA` with domain-specific hints to map a HyperD ``` **What the hints do:** + - `section`, `loader`, `rendering`, `vtex`, `shopify` — text keywords searched in `level:error` logs, grouped by `[service, body]` - `cloud.provider`, `build.step`, `dispatch_namespace` — field names (contain `.` or `_`) queried as existence checks, grouped by field values @@ -42,51 +43,51 @@ Total events in window: 20,080,255 ## Log Levels - - ok: 14,097,710 (72.0%) - - log: 4,484,028 (22.9%) - - info: 956,661 (4.9%) - - warn: 412,639 (2.1%) - - error: 97,716 (0.5%) - - debug: 31,501 (0.2%) +- ok: 14,097,710 (72.0%) +- log: 4,484,028 (22.9%) +- info: 956,661 (4.9%) +- warn: 412,639 (2.1%) +- error: 97,716 (0.5%) +- debug: 31,501 (0.2%) **Note:** If the most common level is NOT "info" or "error" (e.g., "ok"), this instance uses non-standard levels. Adjust your queries accordingly. 
## Active Services (top 15) - - fila-store (9,102,966 events) - - deco-chat-api (4,652,226 events) - - technos (3,164,504 events) - - deco-chat (720,547 events) - - farmrio (550,767 events) - - teciplast (340,178 events) - - oficina-reserva (197,021 events) - - als-storefront (173,420 events) - - lebiscuit (100,233 events) - - casaevideo (96,789 events) - - miess-01 (84,473 events) - - lojastorra-2 (72,225 events) - - osklenbr (65,460 events) - - lojabagaggio (57,753 events) - - montecarlo (42,741 events) +- fila-store (9,102,966 events) +- deco-chat-api (4,652,226 events) +- technos (3,164,504 events) +- deco-chat (720,547 events) +- farmrio (550,767 events) +- teciplast (340,178 events) +- oficina-reserva (197,021 events) +- als-storefront (173,420 events) +- lebiscuit (100,233 events) +- casaevideo (96,789 events) +- miess-01 (84,473 events) +- lojastorra-2 (72,225 events) +- osklenbr (65,460 events) +- lojabagaggio (57,753 events) +- montecarlo (42,741 events) ## Top Errors - - [deco-ai-gateway] Balance alert check failed (8091x) - - [cleanwhey] loader error AbortError: The signal has been aborted (3292x) - - [lojaintegradar] rendering: site/sections/TcoCalculator.tsx at https://landing.lojaintegrada.com.br/... (3074x) - - [farmrio] rendering: site/sections/FarmETC/ListingPage/EtcSearchFeatures.tsx ... (2109x) - - [maconequiio] rendering: commerce/sections/Seo/SeoPLP.tsx HttpError 402: Unavailable Shop (1809x) - - [lebiscuit] error sending request for url: lebiscuit.myvtex.com GraphQL validation failed (1353x) +- [deco-ai-gateway] Balance alert check failed (8091x) +- [cleanwhey] loader error AbortError: The signal has been aborted (3292x) +- [lojaintegradar] rendering: site/sections/TcoCalculator.tsx at https://landing.lojaintegrada.com.br/... (3074x) +- [farmrio] rendering: site/sections/FarmETC/ListingPage/EtcSearchFeatures.tsx ... 
(2109x) +- [maconequiio] rendering: commerce/sections/Seo/SeoPLP.tsx HttpError 402: Unavailable Shop (1809x) +- [lebiscuit] error sending request for url: lebiscuit.myvtex.com GraphQL validation failed (1353x) ## Dashboards - - "platform - traffic split" (5 charts) — fields: duration, http.response.status_code, process.tag.cloud.provider - - "platform - http" (23 charts) — fields: cache_hit, cloud.provider, duration, http.host, http.request.url, ... - - "platform - daily" (21 charts) — fields: build.step, cache_tar_size_mb, cloud.provider, process.tag.site.name, ... - - "platform - commerce" (10 charts) — fields: duration, service - - "decocms infra" (24 charts) — fields: actor.id, actor.method, db.sql.query, tool.id, mcp.tool.name, ... - - "Loaders Cache" (10 charts) — fields: cache_status, duration, service, span_name - - ... and 9 more +- "platform - traffic split" (5 charts) — fields: duration, http.response.status_code, process.tag.cloud.provider +- "platform - http" (23 charts) — fields: cache_hit, cloud.provider, duration, http.host, http.request.url, ... +- "platform - daily" (21 charts) — fields: build.step, cache_tar_size_mb, cloud.provider, process.tag.site.name, ... +- "platform - commerce" (10 charts) — fields: duration, service +- "decocms infra" (24 charts) — fields: actor.id, actor.method, db.sql.query, tool.id, mcp.tool.name, ... +- "Loaders Cache" (10 charts) — fields: cache_status, duration, service, span_name +- ... and 9 more ## Fields Used Across Dashboards @@ -95,38 +96,45 @@ HttpError, actor.id, actor.method, actor.name, build.step, cache_status, cache_t ## Domain-Specific Patterns ### "section" (keyword:section) - - [lojaintegradar] rendering: site/sections/TcoCalculator.tsx ... (3074x) - - [farmrio] rendering: site/sections/FarmETC/... (2109x) - - [docthos] rendering: site/sections/SEO/SeoPDPV2Custom.tsx ... (577x) + +- [lojaintegradar] rendering: site/sections/TcoCalculator.tsx ... (3074x) +- [farmrio] rendering: site/sections/FarmETC/... 
(2109x) +- [docthos] rendering: site/sections/SEO/SeoPDPV2Custom.tsx ... (577x) ### "loader" (keyword:loader) - - [cleanwhey] loader error AbortError: The signal has been aborted (3292x) - - [maconequiio] rendering: commerce/sections/Seo/SeoPLP.tsx HttpError 402 (1809x) - - [ladeira] loader error AbortError: The request has been cancelled (840x) + +- [cleanwhey] loader error AbortError: The signal has been aborted (3292x) +- [maconequiio] rendering: commerce/sections/Seo/SeoPLP.tsx HttpError 402 (1809x) +- [ladeira] loader error AbortError: The request has been cancelled (840x) ### "cloud.provider" (field:cloud.provider) - - kubernetes (352,248x) - - denodeploy (31,697x) - - deno_deploy (42x) - - localhost (2x) + +- kubernetes (352,248x) +- denodeploy (31,697x) +- deno_deploy (42x) +- localhost (2x) ### "vtex" (keyword:vtex) - - [macoteste] sections/Content/SeoPLP.tsx HttpError 400 (1273x) - - [ffloresta] vtex/loaders/intelligentSearch/productListingPage.ts Dangling reference (1261x) - - [miess-01] HttpError 429: Too Many Requests (270x) + +- [macoteste] sections/Content/SeoPLP.tsx HttpError 400 (1273x) +- [ffloresta] vtex/loaders/intelligentSearch/productListingPage.ts Dangling reference (1261x) +- [miess-01] HttpError 429: Too Many Requests (270x) ### "shopify" (keyword:shopify) - - [maconequiio] HttpError 402: Unavailable Shop (1809x) - - [renzi-co] HttpError 402: Unavailable Shop (20x) - - [zeedog-shopify] Cannot read properties of undefined (reading 'athena') (14x) + +- [maconequiio] HttpError 402: Unavailable Shop (1809x) +- [renzi-co] HttpError 402: Unavailable Shop (20x) +- [zeedog-shopify] Cannot read properties of undefined (reading 'athena') (14x) ### "build.step" (field:build.step) - - BUILD (3x) - - SITE_BUILD (3x) - - UPLOAD_RESULTS (3x) + +- BUILD (3x) +- SITE_BUILD (3x) +- UPLOAD_RESULTS (3x) ### "dispatch_namespace" (field:dispatch_namespace) - - deco-chat-prod (15,353x) + +- deco-chat-prod (15,353x) ## Query Tips diff --git a/hyperdx/package.json 
b/hyperdx/package.json index ccb445f3..a8fdfead 100644 --- a/hyperdx/package.json +++ b/hyperdx/package.json @@ -1,8 +1,8 @@ { "name": "hyperdx", "version": "1.0.0", - "description": "MCP server for HyperDX observability platform", "private": true, + "description": "MCP server for HyperDX observability platform", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/indeed-official/README.md b/indeed-official/README.md index d00395dd..375eff84 100644 --- a/indeed-official/README.md +++ b/indeed-official/README.md @@ -39,5 +39,4 @@ https://mcp.indeed.com/claude/mcp --- -*This MCP provides access to Indeed's job search and recruitment platform.* - +_This MCP provides access to Indeed's job search and recruitment platform._ diff --git a/indeed-official/app.json b/indeed-official/app.json index bfa22b2d..9b3dd862 100644 --- a/indeed-official/app.json +++ b/indeed-official/app.json @@ -13,7 +13,17 @@ "categories": ["Job Board"], "official": true, "mesh_unlisted": true, - "tags": ["jobs", "recruitment", "hiring", "job-search", "indeed", "hr", "talent-acquisition", "careers", "employment"], + "tags": [ + "jobs", + "recruitment", + "hiring", + "job-search", + "indeed", + "hr", + "talent-acquisition", + "careers", + "employment" + ], "short_description": "Search and interact with job listings on the world's largest job site", "mesh_description": "Indeed is the world's #1 job site with over 350 million unique visitors per month and millions of job listings across all industries and experience levels. This official MCP enables job seekers to search and filter through comprehensive job listings using natural language queries, save favorite jobs, set up job alerts for specific criteria, and research companies with reviews and ratings from current and former employees. For employers and recruiters, the MCP provides tools to post job listings with detailed descriptions, requirements, and company information. 
Manage applications with applicant tracking features, review candidate profiles and resumes, schedule interviews, and communicate with applicants directly through the platform. Access powerful analytics on job posting performance including views, clicks, and application rates. Use Indeed's matching technology to find qualified candidates based on skills, experience, and location. Leverage salary insights and market data to create competitive compensation packages. Set up sponsored job postings to increase visibility and reach more qualified candidates. Configure screening questions to pre-filter applicants and save time in the hiring process. Perfect for both job seekers looking for their next opportunity and companies building their teams with top talent." } diff --git a/jam-dev/README.md b/jam-dev/README.md index 691ec435..34bd898a 100644 --- a/jam-dev/README.md +++ b/jam-dev/README.md @@ -39,5 +39,4 @@ https://mcp.jam.dev/mcp --- -*This MCP requires an active Jam account to function.* - +_This MCP requires an active Jam account to function._ diff --git a/jam-dev/app.json b/jam-dev/app.json index 24e8aa17..63b111ac 100644 --- a/jam-dev/app.json +++ b/jam-dev/app.json @@ -12,7 +12,17 @@ "metadata": { "categories": ["Software Development"], "official": true, - "tags": ["debugging", "bug-tracking", "developer-tools", "qa", "testing", "jam", "screenshots", "browser-extension", "collaboration"], + "tags": [ + "debugging", + "bug-tracking", + "developer-tools", + "qa", + "testing", + "jam", + "screenshots", + "browser-extension", + "collaboration" + ], "short_description": "Debug faster with instant comprehensive bug reports", "mesh_description": "Jam is the fastest way to capture and share bugs, automatically including everything developers need to debug - screenshots, console logs, network requests, browser info, and device specs in a single click. This official MCP enables development teams to dramatically reduce time spent reproducing and debugging issues. 
When a bug is reported through Jam, it automatically captures the current page state including DOM snapshot, JavaScript console errors and warnings, network activity with request/response details, browser and OS information, screen resolution, and installed extensions. Create annotated screenshots and screen recordings with drawing tools to highlight specific issues. Reproduce bugs reliably with session replay that shows exactly what the user did. Integrate seamlessly with issue trackers including Jira, Linear, GitHub Issues, Asana, and ClickUp to create tickets with all debugging context attached. Share bugs via links without requiring recipients to have Jam installed. Use Jam for QA testing, user acceptance testing, production monitoring, and customer support. Collaborate with team members by adding comments and status updates. Track bug resolution progress and gather metrics on common issues. Configure custom templates for bug reports specific to your workflow. Perfect for product teams, QA engineers, and support teams who need to communicate bugs clearly and help developers fix issues faster without lengthy back-and-forth for missing information." } diff --git a/jotform/README.md b/jotform/README.md index d66a693d..1e1b82c4 100644 --- a/jotform/README.md +++ b/jotform/README.md @@ -24,8 +24,8 @@ Ask your AI assistant to: ## Compatibility -| Product | Deployment type | Support status | -|---------|----------------|----------------| +| Product | Deployment type | Support status | +| ------- | ----------------------- | ---------------------------------------------- | | Jotform | Cloud (hosted endpoint) | ✅ Fully supported via https://mcp.jotform.com | ## Quick Start Guide @@ -172,10 +172,10 @@ Access control is managed via OAuth scopes. 
Only explicitly granted scopes are a Rate limits (same per-user as the Jotform REST API): -| Plan | Requests per minute | -|------|---------------------| -| Free | 60 | -| Enterprise | 600 | +| Plan | Requests per minute | +| ---------- | ------------------- | +| Free | 60 | +| Enterprise | 600 | If limits are exceeded, the server returns HTTP 429 with a Retry-After header. diff --git a/jumpcloud/README.md b/jumpcloud/README.md index 5ced8472..f7e68942 100644 --- a/jumpcloud/README.md +++ b/jumpcloud/README.md @@ -5,11 +5,13 @@ **JumpCloud MCP** is a Model Context Protocol (MCP) server that gives AI assistants access to JumpCloud's directory and identity management platform for managing users, devices, and access policies. ### Purpose + - Query and manage users, groups, and devices across your JumpCloud directory - Automate identity lifecycle tasks such as provisioning and deprovisioning - Inspect and enforce access policies, SSO applications, and MFA settings ### Key Features + - 👤 Manage users and user groups in your directory - 💻 Query and manage enrolled devices and system inventory - 🔐 Configure SSO applications and access permissions diff --git a/jumpcloud/app.json b/jumpcloud/app.json index 20eab4ed..6522b1d9 100644 --- a/jumpcloud/app.json +++ b/jumpcloud/app.json @@ -13,7 +13,14 @@ "metadata": { "categories": ["Infrastructure", "Automation", "Security"], "official": true, - "tags": ["jumpcloud", "directory-management", "llm", "automation", "access-control", "identity"], + "tags": [ + "jumpcloud", + "directory-management", + "llm", + "automation", + "access-control", + "identity" + ], "short_description": "Manage JumpCloud directory resources with AI-powered automation", "mesh_description": "Provides an API to LLMs to manage JumpCloud resources. Enables LLMs to interact with JumpCloud's directory platform, allowing for automated user management, access control, and device management. Manage users, groups, policies, and devices across your organization. 
Perfect for IT teams seeking to automate identity and access management tasks with AI assistance." } diff --git a/linear/README.md b/linear/README.md index 28f2eb96..c04cd094 100644 --- a/linear/README.md +++ b/linear/README.md @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Create, update, and query Linear issues and projects - Manage workflows, cycles, and team assignments - Track progress and status across engineering projects diff --git a/mcp-studio/README.md b/mcp-studio/README.md index 7b5a13d9..5431bf88 100644 --- a/mcp-studio/README.md +++ b/mcp-studio/README.md @@ -1,3 +1,3 @@ -# MCP Studio +# MCP Studio Your MCP server description goes here. diff --git a/mcp-studio/app.json b/mcp-studio/app.json index a21c5d76..89d7398b 100644 --- a/mcp-studio/app.json +++ b/mcp-studio/app.json @@ -9,4 +9,4 @@ "description": "An app that allows you to create and manage MCPs", "icon": "https://assets.decocache.com/mcp/09e44283-f47d-4046-955f-816d227c626f/app.png", "unlisted": false -} \ No newline at end of file +} diff --git a/mcp-studio/package.json b/mcp-studio/package.json index c4c58ef2..87f569c7 100644 --- a/mcp-studio/package.json +++ b/mcp-studio/package.json @@ -1,8 +1,8 @@ { "name": "mcp-studio", "version": "1.0.0", - "description": "Template for MCP with Vite + React view", "private": true, + "description": "Template for MCP with Vite + React view", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", @@ -28,9 +28,9 @@ "zod": "^4.0.0" }, "devDependencies": { + "@decocms/mcps-shared": "1.0.0", "deco-cli": "^0.28.0", - "typescript": "^5.7.2", - "@decocms/mcps-shared": "1.0.0" + "typescript": "^5.7.2" }, "engines": { "node": ">=22.0.0" diff --git a/mcp-studio/server/sandbox/utils/to-quickjs.ts b/mcp-studio/server/sandbox/utils/to-quickjs.ts index f1d651d5..2d25c0f9 100644 --- a/mcp-studio/server/sandbox/utils/to-quickjs.ts +++ b/mcp-studio/server/sandbox/utils/to-quickjs.ts @@ -36,9 +36,7 @@ export function 
toQuickJS(ctx: QuickJSContext, value: unknown): QuickJSHandle { } case "function": { // Create a host function bridge that can be called from guest context - const functionId = `__hostFn_${Date.now()}_${Math.random() - .toString(36) - .substr(2, 9)}`; + const functionId = `__hostFn_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`; // Store the function in a way that can be accessed from guest context // We'll create a proxy function that calls the original function diff --git a/mcp-studio/tsconfig.json b/mcp-studio/tsconfig.json index 65a71cd7..d1f4a7bc 100644 --- a/mcp-studio/tsconfig.json +++ b/mcp-studio/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023", - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ @@ -24,16 +22,10 @@ /* Path Aliases */ "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ], + "server/*": ["./server/*"] }, /* Types */ - "types": [ - "@types/node", - ] + "types": ["@types/node"] }, - "include": [ - "server", - ] -} \ No newline at end of file + "include": ["server"] +} diff --git a/medusa/README.md b/medusa/README.md index 2bed2521..381b24fd 100644 --- a/medusa/README.md +++ b/medusa/README.md @@ -5,11 +5,13 @@ **Medusa MCP** is a Model Context Protocol (MCP) server that provides AI assistants with access to the MedusaJS headless commerce documentation and developer resources. 
### Purpose + - Browse and search the official MedusaJS documentation from within AI-powered tools - Quickly retrieve guides, API references, and integration documentation - Accelerate development by surfacing relevant Medusa concepts and code examples on demand ### Key Features + - 📚 Full access to MedusaJS official documentation content - 🔍 Search across guides, API references, and tutorials - 🛒 Coverage of commerce concepts: products, orders, carts, customers, and more diff --git a/mercadolibre-official/README.md b/mercadolibre-official/README.md index 5d2f2e67..8b3bcf8e 100644 --- a/mercadolibre-official/README.md +++ b/mercadolibre-official/README.md @@ -40,5 +40,4 @@ https://mcp.mercadolibre.com/mcp --- -*This MCP requires an active Mercado Libre seller account and API credentials.* - +_This MCP requires an active Mercado Libre seller account and API credentials._ diff --git a/mercadolibre-official/app.json b/mercadolibre-official/app.json index 2ac93b8a..9327c95e 100644 --- a/mercadolibre-official/app.json +++ b/mercadolibre-official/app.json @@ -13,7 +13,17 @@ "categories": ["E-Commerce"], "official": true, "mesh_unlisted": true, - "tags": ["e-commerce", "marketplace", "online-selling", "latin-america", "mercadolibre", "retail", "inventory", "shipping", "sales"], + "tags": [ + "e-commerce", + "marketplace", + "online-selling", + "latin-america", + "mercadolibre", + "retail", + "inventory", + "shipping", + "sales" + ], "short_description": "Manage your e-commerce operations on Latin America's largest marketplace", "mesh_description": "Mercado Libre is Latin America's leading e-commerce platform with over 140 million active users across 18 countries including Argentina, Brazil, Mexico, Chile, Colombia, and Peru. This official MCP enables sellers to create and manage product listings with detailed descriptions, multiple images, specifications, and competitive pricing strategies. 
Manage your inventory across multiple warehouses and fulfillment centers with real-time stock updates. Process orders from receipt to fulfillment including order confirmation, payment verification, and shipping coordination. Integrate with Mercado Envíos for streamlined shipping with automatic label generation, tracking updates, and delivery confirmation. Handle customer inquiries and questions through the platform's messaging system with automated responses and quick replies. Monitor your seller reputation and performance metrics including response time, shipping accuracy, and customer satisfaction scores. Access detailed sales analytics with revenue reports, best-selling products, traffic sources, and conversion rates. Set up promotional campaigns with discounts, bundles, and special offers to boost sales. Manage returns and refunds efficiently while maintaining high seller ratings. Use the cross-border trade features to sell internationally within Latin America. Configure multiple payment options and installment plans to increase conversion rates. Perfect for sellers looking to grow their business in Latin American markets." 
} diff --git a/mercadopago-official/README.md b/mercadopago-official/README.md index 6aeef156..e10518ea 100644 --- a/mercadopago-official/README.md +++ b/mercadopago-official/README.md @@ -40,5 +40,4 @@ https://mcp.mercadopago.com/mcp --- -*This MCP requires an active Mercado Pago account and API credentials.* - +_This MCP requires an active Mercado Pago account and API credentials._ diff --git a/mercadopago-official/app.json b/mercadopago-official/app.json index 5f0d2302..053ed68c 100644 --- a/mercadopago-official/app.json +++ b/mercadopago-official/app.json @@ -13,7 +13,17 @@ "categories": ["Payments"], "official": true, "mesh_unlisted": true, - "tags": ["payments", "fintech", "transactions", "latin-america", "mercadopago", "gateway", "subscriptions", "refunds", "checkout"], + "tags": [ + "payments", + "fintech", + "transactions", + "latin-america", + "mercadopago", + "gateway", + "subscriptions", + "refunds", + "checkout" + ], "short_description": "Process payments and manage financial operations with Latin America's leading fintech", "mesh_description": "Mercado Pago is Latin America's largest fintech platform processing billions of dollars in transactions annually across 18 countries. This official MCP provides comprehensive payment processing capabilities for credit cards, debit cards, digital wallets, bank transfers, cash payments, and cryptocurrencies. Create secure checkout experiences with customizable payment forms, one-click checkout for returning customers, and mobile-optimized payment flows. Process payments with smart routing to maximize approval rates, automatic retry logic for failed transactions, and real-time fraud detection and prevention. Manage recurring subscriptions with flexible billing cycles, automatic payment collection, and dunning management for failed payments. Handle refunds, chargebacks, and disputes efficiently with automated workflows and detailed transaction history. 
Access comprehensive financial analytics including transaction volumes, approval rates, payment methods distribution, and revenue trends. Configure installment payments with interest-free or interest-bearing options to increase conversion rates. Integrate with major e-commerce platforms, point-of-sale systems, and custom applications through robust APIs. Use QR code payments for in-person transactions with instant confirmation. Manage seller balances, withdrawals, and settlement schedules. Implement 3D Secure authentication for enhanced security compliance. Perfect for businesses of all sizes processing payments in Latin American markets with multi-currency support and local payment methods." } diff --git a/meta-ads/README.md b/meta-ads/README.md index dcbe7f81..fe135ac8 100644 --- a/meta-ads/README.md +++ b/meta-ads/README.md @@ -1,4 +1,4 @@ -# Meta Ads MCP +# Meta Ads MCP Complete MCP for managing and analyzing Meta/Facebook Ads campaigns. @@ -17,60 +17,68 @@ This MCP provides comprehensive tools to **create, manage, and analyze** your Me ## Available Tools ### Accounts - User Token Tools (3 tools) -| Tool | Description | -|------|-------------| -| `META_ADS_GET_USER_INFO` | Get authenticated user information (User Token only) | -| `META_ADS_GET_USER_AD_ACCOUNTS` | List accessible ad accounts (User Token only) | -| `META_ADS_GET_USER_ACCOUNT_PAGES` | Pages associated with the user (User Token only) | + +| Tool | Description | +| --------------------------------- | ---------------------------------------------------- | +| `META_ADS_GET_USER_INFO` | Get authenticated user information (User Token only) | +| `META_ADS_GET_USER_AD_ACCOUNTS` | List accessible ad accounts (User Token only) | +| `META_ADS_GET_USER_ACCOUNT_PAGES` | Pages associated with the user (User Token only) | ### Accounts - Page Token Tools (3 tools) -| Tool | Description | -|------|-------------| -| `META_ADS_GET_PAGE_INFO` | Get current page information (Page Token only) | -| 
`META_ADS_GET_PAGE_AD_ACCOUNTS` | List ad accounts associated with the page (Page Token only) | -| `META_ADS_GET_PAGE_ACCOUNT_PAGES` | Get current page details (Page Token only) | + +| Tool | Description | +| --------------------------------- | ----------------------------------------------------------- | +| `META_ADS_GET_PAGE_INFO` | Get current page information (Page Token only) | +| `META_ADS_GET_PAGE_AD_ACCOUNTS` | List ad accounts associated with the page (Page Token only) | +| `META_ADS_GET_PAGE_ACCOUNT_PAGES` | Get current page details (Page Token only) | ### Accounts - Universal Tools (1 tool) -| Tool | Description | -|------|-------------| + +| Tool | Description | +| --------------------------- | -------------------------------------------------------------------------- | | `META_ADS_GET_ACCOUNT_INFO` | Account details (currency, timezone, status) - works with both token types | ### Campaigns (5 tools) -| Tool | Description | -|------|-------------| -| `META_ADS_GET_CAMPAIGNS` | List campaigns with status filter | -| `META_ADS_GET_CAMPAIGN_DETAILS` | Details of a specific campaign | -| `META_ADS_CREATE_CAMPAIGN` | Create a new campaign with objective, budget, and schedule | -| `META_ADS_UPDATE_CAMPAIGN` | Update/pause/activate a campaign | -| `META_ADS_DELETE_CAMPAIGN` | Delete a campaign | + +| Tool | Description | +| ------------------------------- | ---------------------------------------------------------- | +| `META_ADS_GET_CAMPAIGNS` | List campaigns with status filter | +| `META_ADS_GET_CAMPAIGN_DETAILS` | Details of a specific campaign | +| `META_ADS_CREATE_CAMPAIGN` | Create a new campaign with objective, budget, and schedule | +| `META_ADS_UPDATE_CAMPAIGN` | Update/pause/activate a campaign | +| `META_ADS_DELETE_CAMPAIGN` | Delete a campaign | ### Ad Sets (5 tools) -| Tool | Description | -|------|-------------| -| `META_ADS_GET_ADSETS` | List ad sets with campaign filter | -| `META_ADS_GET_ADSET_DETAILS` | Ad set details (targeting, budget) | 
-| `META_ADS_CREATE_ADSET` | Create ad set with targeting, budget, and optimization | -| `META_ADS_UPDATE_ADSET` | Update/pause/activate an ad set | -| `META_ADS_DELETE_ADSET` | Delete an ad set | + +| Tool | Description | +| ---------------------------- | ------------------------------------------------------ | +| `META_ADS_GET_ADSETS` | List ad sets with campaign filter | +| `META_ADS_GET_ADSET_DETAILS` | Ad set details (targeting, budget) | +| `META_ADS_CREATE_ADSET` | Create ad set with targeting, budget, and optimization | +| `META_ADS_UPDATE_ADSET` | Update/pause/activate an ad set | +| `META_ADS_DELETE_ADSET` | Delete an ad set | ### Ads (5 tools) -| Tool | Description | -|------|-------------| -| `META_ADS_GET_ADS` | List ads with ad set filter | -| `META_ADS_GET_AD_DETAILS` | Ad details | -| `META_ADS_GET_AD_CREATIVES` | Get creative details for an ad | -| `META_ADS_CREATE_AD` | Create a new ad with a creative | -| `META_ADS_UPDATE_AD` | Update/pause/activate an ad | -| `META_ADS_DELETE_AD` | Delete an ad | + +| Tool | Description | +| --------------------------- | ------------------------------- | +| `META_ADS_GET_ADS` | List ads with ad set filter | +| `META_ADS_GET_AD_DETAILS` | Ad details | +| `META_ADS_GET_AD_CREATIVES` | Get creative details for an ad | +| `META_ADS_CREATE_AD` | Create a new ad with a creative | +| `META_ADS_UPDATE_AD` | Update/pause/activate an ad | +| `META_ADS_DELETE_AD` | Delete an ad | ### Creatives (1 tool) -| Tool | Description | -|------|-------------| + +| Tool | Description | +| ----------------------------- | -------------------------------------------------------------------- | | `META_ADS_CREATE_AD_CREATIVE` | Create a creative with text and CTA (use existing posts or link ads) | ### Insights (1 tool) -| Tool | Description | -|------|-------------| + +| Tool | Description | +| ----------------------- | ----------------------------------- | | `META_ADS_GET_INSIGHTS` | Performance metrics with breakdowns | ## Insights 
Metrics @@ -91,17 +99,18 @@ This MCP uses an Access Token for authentication with the Meta Graph API. When installing the MCP, you'll need to provide: -| Field | Required | Description | -|-------|----------|-------------| -| `META_APP_ID` | Yes | Your Meta App ID from [developers.facebook.com/apps](https://developers.facebook.com/apps/) | -| `META_APP_SECRET` | Yes | Your Meta App Secret from App Settings > Basic | -| `META_ACCESS_TOKEN` | Yes | Access Token from Graph API Explorer | +| Field | Required | Description | +| ------------------- | -------- | ------------------------------------------------------------------------------------------- | +| `META_APP_ID` | Yes | Your Meta App ID from [developers.facebook.com/apps](https://developers.facebook.com/apps/) | +| `META_APP_SECRET` | Yes | Your Meta App Secret from App Settings > Basic | +| `META_ACCESS_TOKEN` | Yes | Access Token from Graph API Explorer | ### Automatic Token Exchange 🔄 **The MCP automatically exchanges short-lived tokens for long-lived tokens!** When you provide your App ID and App Secret: + 1. Short-lived tokens (~1 hour) are automatically exchanged for long-lived tokens (~60 days) 2. The exchange happens transparently on first API call 3. Long-lived tokens are cached for the session @@ -111,12 +120,14 @@ This means you can paste a fresh token from Graph API Explorer and it will be au ### How to Get Your Credentials #### Step 1: Get App ID and App Secret + 1. Go to [Meta for Developers](https://developers.facebook.com/apps/) 2. Create a new app or select an existing one 3. Go to **Settings > Basic** 4. Copy your **App ID** and **App Secret** #### Step 2: Get Access Token + 1. Go to [Graph API Explorer](https://developers.facebook.com/tools/explorer/) 2. Select your App from the dropdown 3. 
Click "Generate Access Token" @@ -135,19 +146,21 @@ When generating your token, grant these permissions: ### Token Types Supported **Two types of tokens are supported:** + - **User Access Token**: Access all ad accounts and pages for a user - **Page Access Token**: Access ad accounts and data for a specific page Use the appropriate tools: + - **User Token**: Use `META_ADS_GET_USER_*` tools - **Page Token**: Use `META_ADS_GET_PAGE_*` tools - **Both**: Universal tools like `META_ADS_GET_INSIGHTS` work with either token type ### Token Duration -| Token Type | Duration | -|------------|----------| -| Short-lived (from Graph Explorer) | ~1 hour | +| Token Type | Duration | +| ------------------------------------- | -------- | +| Short-lived (from Graph Explorer) | ~1 hour | | Long-lived (after automatic exchange) | ~60 days | > ⚠️ **Important**: Long-lived tokens expire after ~60 days. You'll need to generate a new token and update the configuration when this happens. @@ -193,30 +206,30 @@ To create a full campaign from scratch, follow this flow: ### Campaign Objectives -| Objective | Use Case | -|-----------|----------| -| `OUTCOME_TRAFFIC` | Drive website visits | -| `OUTCOME_ENGAGEMENT` | Get likes, comments, shares | -| `OUTCOME_LEADS` | Collect leads via forms | -| `OUTCOME_SALES` | Drive purchases/conversions | -| `OUTCOME_AWARENESS` | Reach and brand awareness | -| `OUTCOME_APP_PROMOTION` | App installs | +| Objective | Use Case | +| ----------------------- | --------------------------- | +| `OUTCOME_TRAFFIC` | Drive website visits | +| `OUTCOME_ENGAGEMENT` | Get likes, comments, shares | +| `OUTCOME_LEADS` | Collect leads via forms | +| `OUTCOME_SALES` | Drive purchases/conversions | +| `OUTCOME_AWARENESS` | Reach and brand awareness | +| `OUTCOME_APP_PROMOTION` | App installs | ## Usage Examples ### Reading Data ``` -1. "List my ad accounts" +1. "List my ad accounts" -> META_ADS_GET_USER_AD_ACCOUNTS -2. "Show active campaigns for account act_123" +2. 
"Show active campaigns for account act_123" -> META_ADS_GET_CAMPAIGNS(account_id: "act_123", status_filter: "ACTIVE") 3. "How is campaign X performing in the last 7 days?" -> META_ADS_GET_INSIGHTS(object_id: "campaign_id", date_preset: "last_7d") -4. "Compare results by age and gender" +4. "Compare results by age and gender" -> META_ADS_GET_INSIGHTS(object_id: "campaign_id", breakdowns: ["age", "gender"]) 5. "Which ads have the best CTR?" diff --git a/meta-ads/app.json b/meta-ads/app.json index 03732c51..19d5da1a 100644 --- a/meta-ads/app.json +++ b/meta-ads/app.json @@ -12,9 +12,17 @@ "metadata": { "categories": ["Marketing"], "official": false, - "tags": ["meta", "facebook", "instagram", "ads", "advertising", "analytics", "marketing", "campaigns"], + "tags": [ + "meta", + "facebook", + "instagram", + "ads", + "advertising", + "analytics", + "marketing", + "campaigns" + ], "short_description": "Meta Ads Analytics - Analyze performance of Meta/Facebook advertising campaigns with detailed insights and metrics", "mesh_description": "The Meta Ads Analytics MCP provides deep integration with Meta's advertising platform, offering comprehensive tools to analyze, monitor, and optimize advertising campaigns across Facebook and Instagram. This MCP enables AI agents to retrieve detailed campaign performance metrics, including impressions, clicks, conversions, cost data, and audience engagement statistics. It supports querying campaign hierarchies (campaigns, ad sets, ads), accessing demographic breakdowns, analyzing time-series performance data, and generating custom reports. The integration allows for automated performance monitoring, anomaly detection in ad spend, ROI analysis, and budget optimization recommendations. Ideal for digital marketers, advertising agencies, and data analysts who need to track campaign performance, generate client reports, or build automated optimization workflows. 
Provides access to Meta's Marketing API for programmatic campaign insights and decision-making support." } } - diff --git a/meta-ads/package.json b/meta-ads/package.json index c993d984..97aa8c56 100644 --- a/meta-ads/package.json +++ b/meta-ads/package.json @@ -1,8 +1,8 @@ { "name": "meta-ads", "version": "1.0.0", - "description": "Meta Ads Analytics - Performance analysis for Meta/Facebook advertising campaigns", "private": true, + "description": "Meta Ads Analytics - Performance analysis for Meta/Facebook advertising campaigns", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", @@ -25,4 +25,4 @@ "engines": { "node": ">=22.0.0" } -} \ No newline at end of file +} diff --git a/meta-ads/server/lib/meta-client.ts b/meta-ads/server/lib/meta-client.ts index 541e4223..e8fdd9b0 100644 --- a/meta-ads/server/lib/meta-client.ts +++ b/meta-ads/server/lib/meta-client.ts @@ -459,7 +459,9 @@ export class MetaAdsClient { return makeRequest>( this.config, `/${formattedId}/campaigns`, - { params }, + { + params, + }, ); } @@ -1090,7 +1092,9 @@ export class MetaAdsClient { return makeRequest>( this.config, `/${objectId}/insights`, - { params: queryParams }, + { + params: queryParams, + }, ); } } diff --git a/monday/README.md b/monday/README.md index 68c13ab9..913715e6 100644 --- a/monday/README.md +++ b/monday/README.md @@ -127,9 +127,7 @@ To interact with monday.com's API, you'll need an API token: "mcpServers": { "monday-api-mcp": { "command": "npx", - "args": [ - "@mondaydotcomorg/monday-api-mcp@latest" - ], + "args": ["@mondaydotcomorg/monday-api-mcp@latest"], "env": { "MONDAY_TOKEN": "your_monday_api_token" } @@ -193,9 +191,7 @@ Add to your settings: "mcpServers": { "monday-api-mcp": { "command": "npx", - "args": [ - "@mondaydotcomorg/monday-api-mcp@latest" - ], + "args": ["@mondaydotcomorg/monday-api-mcp@latest"], "env": { "MONDAY_TOKEN": "your_monday_api_token" } @@ -273,22 +269,22 @@ For OAuth authentication and workspace controls, install the Monday MCP 
app from Our MCP server provides a rich set of tools that give AI assistants the ability to interact with monday.com: -| Category | Tool | Description | -|----------|------|-------------| -| **Item Operations** | create_item | Create a new item in a monday.com board with specified column values | -| | delete_item | Delete an item from a board permanently | -| | get_board_items_by_name | Search for items by board ID and term/name | -| | create_update | Add an update/comment to a specific item | -| | change_item_column_values | Modify the column values of an existing item | -| | move_item_to_group | Move an item to a different group within the same board | -| **Board Operations** | create_board | Create a new monday.com board with specified columns | -| | get_board_schema | Retrieve the structure of columns and groups for a board | -| | create_group | Create a new group in a monday.com board | -| | create_column | Add a new column to an existing board | -| | delete_column | Remove a column from a board | -| **Account Operations** | list_users_and_teams | Retrieve user or team's details by id, name or by searching the account | -| **WorkForms Operations** | create_form | Create a new monday.com form | -| | get_form | Get a form by its token | +| Category | Tool | Description | +| ------------------------ | ------------------------- | ----------------------------------------------------------------------- | +| **Item Operations** | create_item | Create a new item in a monday.com board with specified column values | +| | delete_item | Delete an item from a board permanently | +| | get_board_items_by_name | Search for items by board ID and term/name | +| | create_update | Add an update/comment to a specific item | +| | change_item_column_values | Modify the column values of an existing item | +| | move_item_to_group | Move an item to a different group within the same board | +| **Board Operations** | create_board | Create a new monday.com board with specified columns | 
+| | get_board_schema | Retrieve the structure of columns and groups for a board | +| | create_group | Create a new group in a monday.com board | +| | create_column | Add a new column to an existing board | +| | delete_column | Remove a column from a board | +| **Account Operations** | list_users_and_teams | Retrieve user or team's details by id, name or by searching the account | +| **WorkForms Operations** | create_form | Create a new monday.com form | +| | get_form | Get a form by its token | ## 🎨 monday.com Apps Framework Tools @@ -310,11 +306,11 @@ Dynamic API Tools provide AI agents with complete, adaptable access to monday.co ### Key Dynamic API Tools -| Tool | Description | -|------|-------------| -| all_monday_api | Generate and execute any GraphQL query or mutation dynamically | +| Tool | Description | +| ------------------ | -------------------------------------------------------------------- | +| all_monday_api | Generate and execute any GraphQL query or mutation dynamically | | get_graphql_schema | Fetch monday.com's GraphQL schema to understand available operations | -| get_type_details | Retrieve detailed information about specific GraphQL types | +| get_type_details | Retrieve detailed information about specific GraphQL types | ### Unlocked Possibilities @@ -345,13 +341,13 @@ When 'only' mode is enabled, the server will provide just the Dynamic API Tools, ## 🖥️ MCP Server Configuration -| Argument | Flags | Description | Required | Default | -|----------|-------|-------------|----------|---------| -| monday.com API Token | `--token`, `-t` | monday.com API token | Yes | - | -| API Version | `--version`, `-v` | monday.com API version | No | `current` | -| Mode | `--mode`, `-m` | Tool mode: `default` for standard platform tools, `apps` for Apps Framework tools | No | `default` | -| Read Only Mode | `--read-only`, `-ro` | Enable read-only mode | No | `false` | -| Dynamic API Tools | `--enable-dynamic-api-tools`, `-edat` | Enable dynamic API tools | No | 
`false` | +| Argument | Flags | Description | Required | Default | +| -------------------- | ------------------------------------- | --------------------------------------------------------------------------------- | -------- | --------- | +| monday.com API Token | `--token`, `-t` | monday.com API token | Yes | - | +| API Version | `--version`, `-v` | monday.com API version | No | `current` | +| Mode | `--mode`, `-m` | Tool mode: `default` for standard platform tools, `apps` for Apps Framework tools | No | `default` | +| Read Only Mode | `--read-only`, `-ro` | Enable read-only mode | No | `false` | +| Dynamic API Tools | `--enable-dynamic-api-tools`, `-edat` | Enable dynamic API tools | No | `false` | ## 🔐 Authentication & Security diff --git a/monday/app.json b/monday/app.json index 093342cb..ab5f31f5 100644 --- a/monday/app.json +++ b/monday/app.json @@ -13,7 +13,14 @@ "metadata": { "categories": ["Productivity", "Automation"], "official": true, - "tags": ["monday.com", "project-management", "automation", "workflows", "integration", "boards"], + "tags": [ + "monday.com", + "project-management", + "automation", + "workflows", + "integration", + "boards" + ], "short_description": "MCP server for monday.com - manage boards, items, and automations", "mesh_description": "Connects to monday.com, enabling integration with its project management platform. It provides a bridge for interacting with monday.com boards, tasks, and automations. Manage work items, update statuses, create automations, and build custom workflows. Perfect for teams using monday.com for project management who want AI-powered assistance with task management and workflow automation." 
} diff --git a/multi-channel-inbox/.mcp.json b/multi-channel-inbox/.mcp.json index 62301248..0012c699 100644 --- a/multi-channel-inbox/.mcp.json +++ b/multi-channel-inbox/.mcp.json @@ -1,8 +1,8 @@ { - "mcpServers": { - "mcp-app": { - "type": "sse", - "url": "http://localhost:3001/api/mcp" - } - } + "mcpServers": { + "mcp-app": { + "type": "sse", + "url": "http://localhost:3001/api/mcp" + } + } } diff --git a/multi-channel-inbox/README.md b/multi-channel-inbox/README.md index b572bd13..22380d10 100644 --- a/multi-channel-inbox/README.md +++ b/multi-channel-inbox/README.md @@ -1,4 +1,4 @@ -# Multi Channel Inbox +# Multi Channel Inbox Unified support inbox that aggregates messages from Slack, Discord and Gmail into a single interface with conversation tracking, AI classification and cross-platform replies. @@ -38,23 +38,25 @@ The `inbox_resolve_conversation` tool marks a conversation as resolved and, for ## Bindings (Mesh configuration) -| Binding | Type | Description | -|---|---|---| -| `DATABASE` | `@deco/postgres` | PostgreSQL database for conversations and messages | -| `EVENT_BUS` | `@deco/event-bus` | Receives `slack.message.*` and `discord.message.created` events | -| `CONNECTION` | `@deco/connection` | Mesh connections to Slack, Discord and Gmail MCPs | -| `MODEL_PROVIDER` | `@deco/llm` | (Optional) LLM for AI classification, summarization and reply suggestions | -| `LANGUAGE_MODEL` | Language model | (Optional) Specific model to use for AI features | -| `GMAIL_POLL_INTERVAL_MINUTES` | number | (Optional) Gmail poll interval in minutes (default: 3) | +| Binding | Type | Description | +| ----------------------------- | ------------------ | ------------------------------------------------------------------------- | +| `DATABASE` | `@deco/postgres` | PostgreSQL database for conversations and messages | +| `EVENT_BUS` | `@deco/event-bus` | Receives `slack.message.*` and `discord.message.created` events | +| `CONNECTION` | `@deco/connection` | Mesh connections to 
Slack, Discord and Gmail MCPs | +| `MODEL_PROVIDER` | `@deco/llm` | (Optional) LLM for AI classification, summarization and reply suggestions | +| `LANGUAGE_MODEL` | Language model | (Optional) Specific model to use for AI features | +| `GMAIL_POLL_INTERVAL_MINUTES` | number | (Optional) Gmail poll interval in minutes (default: 3) | ## MCP Tools ### Source Management + - `inbox_add_source` — Add a Slack channel, Discord channel or Gmail label to monitor - `inbox_list_sources` — List all configured sources - `inbox_remove_source` — Disable a source (soft delete) ### Conversations + - `inbox_list_conversations` — List with filters (status, priority, source type, search) and pagination - `inbox_get_conversation` — Get conversation detail with all messages - `inbox_update_conversation` — Update status, priority, assignee, category, tags @@ -62,10 +64,12 @@ The `inbox_resolve_conversation` tool marks a conversation as resolved and, for - `inbox_stats` — Counts by source, status and priority ### Actions + - `inbox_reply` — Reply through the original platform via Mesh - `inbox_resolve_conversation` — Mark as resolved + apply forum tags (Discord) ### AI (requires MODEL_PROVIDER) + - `inbox_classify` — Auto-classify category and priority - `inbox_summarize` — Summarize conversation - `inbox_suggest_reply` — Generate reply suggestion @@ -105,16 +109,16 @@ bun run dev This starts the API server on port 3001 with hot reload and the Vite web build in watch mode. 
-| Command | Description | -|---|---| -| `bun run dev` | API server + web build (watch mode) | -| `bun run dev:api` | API server only (port 3001) | -| `bun run dev:web` | Web build only (watch mode) | -| `bun run build` | Production build (web + server) | -| `bun run check` | TypeScript type check | -| `bun run ci:check` | Biome lint + format (CI) | -| `bun run fmt` | Auto-format with Biome | -| `bun test` | Run tests | +| Command | Description | +| ------------------ | ----------------------------------- | +| `bun run dev` | API server + web build (watch mode) | +| `bun run dev:api` | API server only (port 3001) | +| `bun run dev:web` | Web build only (watch mode) | +| `bun run build` | Production build (web + server) | +| `bun run check` | TypeScript type check | +| `bun run ci:check` | Biome lint + format (CI) | +| `bun run fmt` | Auto-format with Biome | +| `bun test` | Run tests | The MCP endpoint is exposed at `http://localhost:3001/api/mcp` (SSE transport). diff --git a/multi-channel-inbox/api/ingestion/gmail.ts b/multi-channel-inbox/api/ingestion/gmail.ts index 36efff21..2f8bbc11 100644 --- a/multi-channel-inbox/api/ingestion/gmail.ts +++ b/multi-channel-inbox/api/ingestion/gmail.ts @@ -122,7 +122,10 @@ async function pollGmailSource( env, source.connection_id, "gmail_search_messages", - { q: sinceQuery, maxResults: 50 }, + { + q: sinceQuery, + maxResults: 50, + }, ); let messages: GmailMessage[]; @@ -159,7 +162,9 @@ async function pollGmailSource( env, source.connection_id, "gmail_get_message", - { messageId: msg.id }, + { + messageId: msg.id, + }, ); fullMsg = parseJsonFromResult(detailResult); } catch { diff --git a/multi-channel-inbox/app.json b/multi-channel-inbox/app.json index 460de224..21d18336 100644 --- a/multi-channel-inbox/app.json +++ b/multi-channel-inbox/app.json @@ -1,18 +1,18 @@ { - "scopeName": "deco", - "name": "multi-channel-inbox", - "friendlyName": "Multi Channel Inbox", - "description": "Unified support inbox for Slack, Discord and 
Gmail", - "icon": "https://assets.decocache.com/decocms/9c93cffc-7e66-4761-8124-31a70ddd4463/Gemini_Generated_Image_4tz2a94tz2a94tz2.png", - "connection": { - "type": "HTTP", - "url": "https://sites-multi-channel-inbox.decocache.com/api/mcp" - }, - "unlisted": true, - "metadata": { - "categories": [], - "official": false, - "tags": ["template", "mcp-app"], - "short_description": "Unified support inbox for Slack, Discord and Gmail" - } + "scopeName": "deco", + "name": "multi-channel-inbox", + "friendlyName": "Multi Channel Inbox", + "description": "Unified support inbox for Slack, Discord and Gmail", + "icon": "https://assets.decocache.com/decocms/9c93cffc-7e66-4761-8124-31a70ddd4463/Gemini_Generated_Image_4tz2a94tz2a94tz2.png", + "connection": { + "type": "HTTP", + "url": "https://sites-multi-channel-inbox.decocache.com/api/mcp" + }, + "unlisted": true, + "metadata": { + "categories": [], + "official": false, + "tags": ["template", "mcp-app"], + "short_description": "Unified support inbox for Slack, Discord and Gmail" + } } diff --git a/multi-channel-inbox/biome.json b/multi-channel-inbox/biome.json index ece29dbc..7526bc04 100644 --- a/multi-channel-inbox/biome.json +++ b/multi-channel-inbox/biome.json @@ -1,80 +1,80 @@ { - "$schema": "https://biomejs.dev/schemas/2.4.5/schema.json", - "vcs": { - "enabled": true, - "clientKind": "git", - "useIgnoreFile": true - }, - "files": { - "ignoreUnknown": true, - "includes": ["**", "!**/styles.css", "!**/globals.css"] - }, - "formatter": { - "enabled": true, - "indentStyle": "tab" - }, - "linter": { - "enabled": true, - "rules": { - "recommended": true, - "correctness": { - "useImportExtensions": "error", - "useHookAtTopLevel": "error" - } - } - }, - "overrides": [ - { - "includes": ["**/*.test.ts"], - "linter": { - "rules": { - "suspicious": { - "noExplicitAny": "off" - } - } - } - }, - { - "includes": ["web/components/ui/**", "web/hooks/**"], - "linter": { - "rules": { - "a11y": { - "useFocusableInteractive": "off", - 
"useSemanticElements": "off", - "noRedundantRoles": "off", - "useAriaPropsForRole": "off", - "useKeyWithClickEvents": "off" - }, - "suspicious": { - "noArrayIndexKey": "off", - "noDocumentCookie": "off", - "noDoubleEquals": "off" - }, - "correctness": { - "useExhaustiveDependencies": "off", - "useImportExtensions": "off" - }, - "style": { - "useImportType": "off" - }, - "security": { - "noDangerouslySetInnerHtml": "off" - } - } - } - } - ], - "javascript": { - "formatter": { - "quoteStyle": "double" - } - }, - "assist": { - "enabled": true, - "actions": { - "source": { - "organizeImports": "on" - } - } - } + "$schema": "https://biomejs.dev/schemas/2.4.5/schema.json", + "vcs": { + "enabled": true, + "clientKind": "git", + "useIgnoreFile": true + }, + "files": { + "ignoreUnknown": true, + "includes": ["**", "!**/styles.css", "!**/globals.css"] + }, + "formatter": { + "enabled": true, + "indentStyle": "tab" + }, + "linter": { + "enabled": true, + "rules": { + "recommended": true, + "correctness": { + "useImportExtensions": "error", + "useHookAtTopLevel": "error" + } + } + }, + "overrides": [ + { + "includes": ["**/*.test.ts"], + "linter": { + "rules": { + "suspicious": { + "noExplicitAny": "off" + } + } + } + }, + { + "includes": ["web/components/ui/**", "web/hooks/**"], + "linter": { + "rules": { + "a11y": { + "useFocusableInteractive": "off", + "useSemanticElements": "off", + "noRedundantRoles": "off", + "useAriaPropsForRole": "off", + "useKeyWithClickEvents": "off" + }, + "suspicious": { + "noArrayIndexKey": "off", + "noDocumentCookie": "off", + "noDoubleEquals": "off" + }, + "correctness": { + "useExhaustiveDependencies": "off", + "useImportExtensions": "off" + }, + "style": { + "useImportType": "off" + }, + "security": { + "noDangerouslySetInnerHtml": "off" + } + } + } + } + ], + "javascript": { + "formatter": { + "quoteStyle": "double" + } + }, + "assist": { + "enabled": true, + "actions": { + "source": { + "organizeImports": "on" + } + } + } } diff --git 
a/multi-channel-inbox/components.json b/multi-channel-inbox/components.json index eadc9bfa..edd620e4 100644 --- a/multi-channel-inbox/components.json +++ b/multi-channel-inbox/components.json @@ -1,23 +1,23 @@ { - "$schema": "https://ui.shadcn.com/schema.json", - "style": "new-york", - "rsc": false, - "tsx": true, - "tailwind": { - "config": "", - "css": "web/globals.css", - "baseColor": "neutral", - "cssVariables": true, - "prefix": "" - }, - "iconLibrary": "lucide", - "rtl": false, - "aliases": { - "components": "@/components", - "utils": "@/lib/utils", - "ui": "@/components/ui", - "lib": "@/lib", - "hooks": "@/hooks" - }, - "registries": {} + "$schema": "https://ui.shadcn.com/schema.json", + "style": "new-york", + "rsc": false, + "tsx": true, + "tailwind": { + "config": "", + "css": "web/globals.css", + "baseColor": "neutral", + "cssVariables": true, + "prefix": "" + }, + "iconLibrary": "lucide", + "rtl": false, + "aliases": { + "components": "@/components", + "utils": "@/lib/utils", + "ui": "@/components/ui", + "lib": "@/lib", + "hooks": "@/hooks" + }, + "registries": {} } diff --git a/multi-channel-inbox/conductor.json b/multi-channel-inbox/conductor.json index 9d2ef197..1772c40a 100644 --- a/multi-channel-inbox/conductor.json +++ b/multi-channel-inbox/conductor.json @@ -1,7 +1,7 @@ { - "scripts": { - "setup": "bun install", - "run": "bun run dev:conductor", - "archive": "rm -rf node_modules" - } + "scripts": { + "setup": "bun install", + "run": "bun run dev:conductor", + "archive": "rm -rf node_modules" + } } diff --git a/multi-channel-inbox/index.html b/multi-channel-inbox/index.html index bbcf8837..fd11ca17 100644 --- a/multi-channel-inbox/index.html +++ b/multi-channel-inbox/index.html @@ -1,12 +1,12 @@ - - - - MCP App - - -
- - + + + + MCP App + + +
+ + diff --git a/multi-channel-inbox/package.json b/multi-channel-inbox/package.json index fe3c80e7..aa8f9f2f 100644 --- a/multi-channel-inbox/package.json +++ b/multi-channel-inbox/package.json @@ -1,68 +1,68 @@ { - "name": "multi-channel-inbox", - "version": "0.1.0", - "description": "Unified support inbox for Slack, Discord and Gmail", - "private": true, - "type": "module", - "scripts": { - "dev": "concurrently -k \"bun run dev:api\" \"bun run dev:web\"", - "dev:api": "bun run --hot api/main.ts", - "dev:web": "vite build --watch", - "build:web": "vite build", - "build:server": "NODE_ENV=production bun build api/main.ts --target=bun --minify --outfile=dist/server/main.js", - "build": "bun run build:web && bun run build:server", - "ci:check": "bunx biome ci .", - "check": "tsc --noEmit", - "test": "bun test", - "fmt": "biome format --write .", - "lint": "biome lint --write .", - "dev:worktree": "bun run scripts/dev-worktree.ts", - "dev:conductor": "WORKTREE_SLUG=$CONDUCTOR_WORKSPACE_NAME bun run dev:worktree" - }, - "dependencies": { - "@base-ui/react": "^1.2.0", - "@decocms/runtime": "^1.2.10", - "@hookform/resolvers": "^5.2.2", - "@modelcontextprotocol/ext-apps": "^1.1.2", - "@modelcontextprotocol/sdk": "^1.26.0", - "@tailwindcss/vite": "^4.2.1", - "@tanstack/history": "^1.133.5", - "@tanstack/react-query": "^5.90.21", - "@tanstack/react-router": "^1.133.5", - "class-variance-authority": "^0.7.1", - "clsx": "^2.1.1", - "cmdk": "^1.1.1", - "date-fns": "^4.1.0", - "embla-carousel-react": "^8.6.0", - "input-otp": "^1.4.2", - "lucide-react": "^0.576.0", - "next-themes": "^0.4.6", - "radix-ui": "^1.4.3", - "react": "^19.2.0", - "react-day-picker": "^9.14.0", - "react-dom": "^19.2.0", - "react-hook-form": "^7.71.2", - "react-resizable-panels": "^4", - "recharts": "2.15.4", - "sonner": "^2.0.7", - "tailwind-merge": "^3.5.0", - "tailwindcss": "^4.2.1", - "vaul": "^1.1.2", - "@decocms/mcps-shared": "workspace:*", - "zod": "^4.3.6" - }, - "devDependencies": { - 
"@biomejs/biome": "^2.3.14", - "@types/bun": "^1.3.8", - "@types/react": "^19.2.2", - "@types/react-dom": "^19.2.2", - "@vitejs/plugin-react": "^5.0.4", - "babel-plugin-react-compiler": "^1.0.0", - "concurrently": "^9.2.1", - "tw-animate-css": "^1.4.0", - "typescript": "^5.9.3", - "vite": "^7.3.1", - "vite-plugin-singlefile": "^2.3.0", - "worktree-devservers": "0.3.1" - } + "name": "multi-channel-inbox", + "version": "0.1.0", + "private": true, + "description": "Unified support inbox for Slack, Discord and Gmail", + "type": "module", + "scripts": { + "dev": "concurrently -k \"bun run dev:api\" \"bun run dev:web\"", + "dev:api": "bun run --hot api/main.ts", + "dev:web": "vite build --watch", + "build:web": "vite build", + "build:server": "NODE_ENV=production bun build api/main.ts --target=bun --minify --outfile=dist/server/main.js", + "build": "bun run build:web && bun run build:server", + "ci:check": "bunx biome ci .", + "check": "tsc --noEmit", + "test": "bun test", + "fmt": "biome format --write .", + "lint": "biome lint --write .", + "dev:worktree": "bun run scripts/dev-worktree.ts", + "dev:conductor": "WORKTREE_SLUG=$CONDUCTOR_WORKSPACE_NAME bun run dev:worktree" + }, + "dependencies": { + "@base-ui/react": "^1.2.0", + "@decocms/mcps-shared": "workspace:*", + "@decocms/runtime": "^1.2.10", + "@hookform/resolvers": "^5.2.2", + "@modelcontextprotocol/ext-apps": "^1.1.2", + "@modelcontextprotocol/sdk": "^1.26.0", + "@tailwindcss/vite": "^4.2.1", + "@tanstack/history": "^1.133.5", + "@tanstack/react-query": "^5.90.21", + "@tanstack/react-router": "^1.133.5", + "class-variance-authority": "^0.7.1", + "clsx": "^2.1.1", + "cmdk": "^1.1.1", + "date-fns": "^4.1.0", + "embla-carousel-react": "^8.6.0", + "input-otp": "^1.4.2", + "lucide-react": "^0.576.0", + "next-themes": "^0.4.6", + "radix-ui": "^1.4.3", + "react": "^19.2.0", + "react-day-picker": "^9.14.0", + "react-dom": "^19.2.0", + "react-hook-form": "^7.71.2", + "react-resizable-panels": "^4", + "recharts": 
"2.15.4", + "sonner": "^2.0.7", + "tailwind-merge": "^3.5.0", + "tailwindcss": "^4.2.1", + "vaul": "^1.1.2", + "zod": "^4.3.6" + }, + "devDependencies": { + "@biomejs/biome": "^2.3.14", + "@types/bun": "^1.3.8", + "@types/react": "^19.2.2", + "@types/react-dom": "^19.2.2", + "@vitejs/plugin-react": "^5.0.4", + "babel-plugin-react-compiler": "^1.0.0", + "concurrently": "^9.2.1", + "tw-animate-css": "^1.4.0", + "typescript": "^5.9.3", + "vite": "^7.3.1", + "vite-plugin-singlefile": "^2.3.0", + "worktree-devservers": "0.3.1" + } } diff --git a/multi-channel-inbox/tsconfig.json b/multi-channel-inbox/tsconfig.json index 2c7ab96e..dd7c562a 100644 --- a/multi-channel-inbox/tsconfig.json +++ b/multi-channel-inbox/tsconfig.json @@ -1,24 +1,24 @@ { - "compilerOptions": { - "target": "ESNext", - "module": "ESNext", - "moduleResolution": "bundler", - "types": ["bun", "vite/client"], - "lib": ["ESNext", "DOM"], - "jsx": "react-jsx", - "strict": true, - "esModuleInterop": true, - "skipLibCheck": true, - "forceConsistentCasingInFileNames": true, - "resolveJsonModule": true, - "isolatedModules": true, - "allowImportingTsExtensions": true, - "noEmit": true, - "baseUrl": ".", - "paths": { - "@/*": ["./web/*"] - } - }, - "include": ["api/**/*", "web/**/*"], - "exclude": ["node_modules", "dist"] + "compilerOptions": { + "target": "ESNext", + "module": "ESNext", + "moduleResolution": "bundler", + "types": ["bun", "vite/client"], + "lib": ["ESNext", "DOM"], + "jsx": "react-jsx", + "strict": true, + "esModuleInterop": true, + "skipLibCheck": true, + "forceConsistentCasingInFileNames": true, + "resolveJsonModule": true, + "isolatedModules": true, + "allowImportingTsExtensions": true, + "noEmit": true, + "baseUrl": ".", + "paths": { + "@/*": ["./web/*"] + } + }, + "include": ["api/**/*", "web/**/*"], + "exclude": ["node_modules", "dist"] } diff --git a/multi-channel-inbox/web/globals.css b/multi-channel-inbox/web/globals.css index 5fcbbe77..ced07d3b 100644 --- 
a/multi-channel-inbox/web/globals.css +++ b/multi-channel-inbox/web/globals.css @@ -3,107 +3,107 @@ @custom-variant dark (&:is([data-theme="dark"] *)); :root { - /* Host variable fallbacks (light) */ - --color-background-primary: hsl(0 0% 100%); - --color-background-secondary: hsl(240 4.8% 95.9%); - --color-background-tertiary: hsl(240 4.8% 95.9%); - --color-background-danger: hsl(0 84.2% 60.2%); - --color-text-primary: hsl(240 10% 3.9%); - --color-text-secondary: hsl(240 3.8% 46.1%); - --color-border-primary: hsl(240 5.9% 90%); - --color-border-secondary: hsl(240 5.9% 90%); - --color-ring-primary: hsl(240 5.9% 10%); + /* Host variable fallbacks (light) */ + --color-background-primary: hsl(0 0% 100%); + --color-background-secondary: hsl(240 4.8% 95.9%); + --color-background-tertiary: hsl(240 4.8% 95.9%); + --color-background-danger: hsl(0 84.2% 60.2%); + --color-text-primary: hsl(240 10% 3.9%); + --color-text-secondary: hsl(240 3.8% 46.1%); + --color-border-primary: hsl(240 5.9% 90%); + --color-border-secondary: hsl(240 5.9% 90%); + --color-ring-primary: hsl(240 5.9% 10%); - /* Primary color (no host equivalent) */ - --primary: hsl(240 5.9% 10%); - --primary-foreground: hsl(0 0% 98%); + /* Primary color (no host equivalent) */ + --primary: hsl(240 5.9% 10%); + --primary-foreground: hsl(0 0% 98%); - /* Sidebar tokens (existing) */ - --sidebar: hsl(0 0% 98%); - --sidebar-foreground: hsl(240 5.3% 26.1%); - --sidebar-primary: hsl(240 5.9% 10%); - --sidebar-primary-foreground: hsl(0 0% 98%); - --sidebar-accent: hsl(240 4.8% 95.9%); - --sidebar-accent-foreground: hsl(240 5.9% 10%); - --sidebar-border: hsl(220 13% 91%); - --sidebar-ring: hsl(217.2 91.2% 59.8%); + /* Sidebar tokens (existing) */ + --sidebar: hsl(0 0% 98%); + --sidebar-foreground: hsl(240 5.3% 26.1%); + --sidebar-primary: hsl(240 5.9% 10%); + --sidebar-primary-foreground: hsl(0 0% 98%); + --sidebar-accent: hsl(240 4.8% 95.9%); + --sidebar-accent-foreground: hsl(240 5.9% 10%); + --sidebar-border: hsl(220 13% 
91%); + --sidebar-ring: hsl(217.2 91.2% 59.8%); } [data-theme="dark"] { - /* Host variable fallbacks (dark) */ - --color-background-primary: hsl(240 10% 3.9%); - --color-background-secondary: hsl(240 3.7% 15.9%); - --color-background-tertiary: hsl(240 3.7% 15.9%); - --color-background-danger: hsl(0 62.8% 30.6%); - --color-text-primary: hsl(0 0% 98%); - --color-text-secondary: hsl(240 5% 64.9%); - --color-border-primary: hsl(240 3.7% 15.9%); - --color-border-secondary: hsl(240 3.7% 15.9%); - --color-ring-primary: hsl(240 4.9% 83.9%); + /* Host variable fallbacks (dark) */ + --color-background-primary: hsl(240 10% 3.9%); + --color-background-secondary: hsl(240 3.7% 15.9%); + --color-background-tertiary: hsl(240 3.7% 15.9%); + --color-background-danger: hsl(0 62.8% 30.6%); + --color-text-primary: hsl(0 0% 98%); + --color-text-secondary: hsl(240 5% 64.9%); + --color-border-primary: hsl(240 3.7% 15.9%); + --color-border-secondary: hsl(240 3.7% 15.9%); + --color-ring-primary: hsl(240 4.9% 83.9%); - /* Primary color (no host equivalent) */ - --primary: hsl(0 0% 98%); - --primary-foreground: hsl(240 5.9% 10%); + /* Primary color (no host equivalent) */ + --primary: hsl(0 0% 98%); + --primary-foreground: hsl(240 5.9% 10%); - /* Sidebar tokens (existing) */ - --sidebar: hsl(240 5.9% 10%); - --sidebar-foreground: hsl(240 4.8% 95.9%); - --sidebar-primary: hsl(224.3 76.3% 48%); - --sidebar-primary-foreground: hsl(0 0% 100%); - --sidebar-accent: hsl(240 3.7% 15.9%); - --sidebar-accent-foreground: hsl(240 4.8% 95.9%); - --sidebar-border: hsl(240 3.7% 15.9%); - --sidebar-ring: hsl(217.2 91.2% 59.8%); + /* Sidebar tokens (existing) */ + --sidebar: hsl(240 5.9% 10%); + --sidebar-foreground: hsl(240 4.8% 95.9%); + --sidebar-primary: hsl(224.3 76.3% 48%); + --sidebar-primary-foreground: hsl(0 0% 100%); + --sidebar-accent: hsl(240 3.7% 15.9%); + --sidebar-accent-foreground: hsl(240 4.8% 95.9%); + --sidebar-border: hsl(240 3.7% 15.9%); + --sidebar-ring: hsl(217.2 91.2% 59.8%); } @layer 
base { - button, - a, - select, - summary, - [role="button"], - [role="tab"], - [role="checkbox"], - [role="radio"], - [role="switch"], - [role="menuitem"], - [role="option"], - label[for] { - cursor: pointer; - } + button, + a, + select, + summary, + [role="button"], + [role="tab"], + [role="checkbox"], + [role="radio"], + [role="switch"], + [role="menuitem"], + [role="option"], + label[for] { + cursor: pointer; + } } @theme inline { - /* Core colors — mapped from host variables */ - --color-background: var(--color-background-primary); - --color-foreground: var(--color-text-primary); - --color-card: var(--color-background-primary); - --color-card-foreground: var(--color-text-primary); - --color-popover: var(--color-background-primary); - --color-popover-foreground: var(--color-text-primary); - --color-primary: var(--primary); - --color-primary-foreground: var(--primary-foreground); - --color-secondary: var(--color-background-secondary); - --color-secondary-foreground: var(--color-text-primary); - --color-muted: var(--color-background-secondary); - --color-muted-foreground: var(--color-text-secondary); - --color-accent: var(--color-background-tertiary); - --color-accent-foreground: var(--color-text-primary); - --color-destructive: var(--color-background-danger); - --color-border: var(--color-border-primary); - --color-input: var(--color-border-secondary); - --color-ring: var(--color-ring-primary); + /* Core colors — mapped from host variables */ + --color-background: var(--color-background-primary); + --color-foreground: var(--color-text-primary); + --color-card: var(--color-background-primary); + --color-card-foreground: var(--color-text-primary); + --color-popover: var(--color-background-primary); + --color-popover-foreground: var(--color-text-primary); + --color-primary: var(--primary); + --color-primary-foreground: var(--primary-foreground); + --color-secondary: var(--color-background-secondary); + --color-secondary-foreground: var(--color-text-primary); + 
--color-muted: var(--color-background-secondary); + --color-muted-foreground: var(--color-text-secondary); + --color-accent: var(--color-background-tertiary); + --color-accent-foreground: var(--color-text-primary); + --color-destructive: var(--color-background-danger); + --color-border: var(--color-border-primary); + --color-input: var(--color-border-secondary); + --color-ring: var(--color-ring-primary); - /* Layout */ - --radius-DEFAULT: var(--border-radius-md, 0.375rem); + /* Layout */ + --radius-DEFAULT: var(--border-radius-md, 0.375rem); - /* Sidebar (existing) */ - --color-sidebar: var(--sidebar); - --color-sidebar-foreground: var(--sidebar-foreground); - --color-sidebar-primary: var(--sidebar-primary); - --color-sidebar-primary-foreground: var(--sidebar-primary-foreground); - --color-sidebar-accent: var(--sidebar-accent); - --color-sidebar-accent-foreground: var(--sidebar-accent-foreground); - --color-sidebar-border: var(--sidebar-border); - --color-sidebar-ring: var(--sidebar-ring); + /* Sidebar (existing) */ + --color-sidebar: var(--sidebar); + --color-sidebar-foreground: var(--sidebar-foreground); + --color-sidebar-primary: var(--sidebar-primary); + --color-sidebar-primary-foreground: var(--sidebar-primary-foreground); + --color-sidebar-accent: var(--sidebar-accent); + --color-sidebar-accent-foreground: var(--sidebar-accent-foreground); + --color-sidebar-border: var(--sidebar-border); + --color-sidebar-ring: var(--sidebar-ring); } diff --git a/mux-api/README.md b/mux-api/README.md index 64448fa7..8f5130f3 100644 --- a/mux-api/README.md +++ b/mux-api/README.md @@ -77,24 +77,24 @@ You can use any JWT-compatible library, but we've included some light helpers in // Signing token secret: process.env.MUX_PRIVATE_KEY // Most simple request, defaults to type video and is valid for 7 days. 
-const token = mux.jwt.signPlaybackId('some-playback-id');
+const token = mux.jwt.signPlaybackId("some-playback-id");
// https://stream.mux.com/some-playback-id.m3u8?token=${token}

// If you wanted to sign a thumbnail
const thumbParams = { time: 14, width: 100 };
-const thumbToken = mux.jwt.signPlaybackId('some-playback-id', {
- type: 'thumbnail',
+const thumbToken = mux.jwt.signPlaybackId("some-playback-id", {
+ type: "thumbnail",
params: thumbParams,
});
// https://image.mux.com/some-playback-id/thumbnail.jpg?token=${token}

// If you wanted to sign a gif
-const gifToken = mux.jwt.signPlaybackId('some-playback-id', { type: 'gif' });
+const gifToken = mux.jwt.signPlaybackId("some-playback-id", { type: "gif" });
// https://image.mux.com/some-playback-id/animated.gif?token=${token}

// Here's an example for a storyboard
-const storyboardToken = mux.jwt.signPlaybackId('some-playback-id', {
- type: 'storyboard',
+const storyboardToken = mux.jwt.signPlaybackId("some-playback-id", {
+ type: "storyboard",
});
// https://image.mux.com/some-playback-id/storyboard.jpg?token=${token}
@@ -102,15 +102,17 @@ const storyboardToken = mux.jwt.signPlaybackId('some-playback-id', {

// You can also use `signViewerCounts` to get a token
// used for requests to the Mux Engagement Counts API
// https://docs.mux.com/guides/see-how-many-people-are-watching
-const statsToken = mux.jwt.signViewerCounts('some-live-stream-id', {
- type: 'live_stream',
+const statsToken = mux.jwt.signViewerCounts("some-live-stream-id", {
+ type: "live_stream",
});
// https://stats.mux.com/counts?token={statsToken}
```

### Signing multiple JWTs at once
+
In cases where you need multiple tokens, like when using Mux Player, things can get unwieldy pretty quickly.
For example,
+
```tsx
const playbackToken = await mux.jwt.signPlaybackId(id, {
expiration: "1d",
@@ -139,6 +141,7 @@ const drmToken = await mux.jwt.signPlaybackId(id, {
```

To simplify this use-case, you can provide multiple types to `signPlaybackId` to receive multiple tokens. These tokens are provided in a format that Mux Player can take as props:
+
```tsx
// { "playback-token", "thumbnail-token", "storyboard-token", "drm-token" }
const tokens = await mux.jwt.signPlaybackId(id, {
@@ -153,11 +156,12 @@ const tokens = await mux.jwt.signPlaybackId(id, {
```

If you would like to provide params to a single token (e.g., if you would like to have a thumbnail `time`), you can provide `[type, typeParams]` instead of `type`:
+
```tsx
const tokens = await mux.jwt.signPlaybackId(id, {
expiration: "1d",
- type: ["playback", ["thumbnail", { time: 2 }], "storyboard", "drm_license"]
-})
+ type: ["playback", ["thumbnail", { time: 2 }], "storyboard", "drm_license"],
+});
```

## Parsing Webhook payloads
@@ -243,10 +247,10 @@ mux.webhooks.verifySignature(body, headers, secret);

Note that when passing in the payload (body) you want to pass in the raw, unparsed request body, not the parsed JSON. Here's an example if you are using Express.

```js
-const Mux = require('@mux/mux-node');
+const Mux = require("@mux/mux-node");
const mux = new Mux();
-const express = require('express');
-const bodyParser = require('body-parser');
+const express = require("express");
+const bodyParser = require("body-parser");

/**
* You'll need to make sure this is externally accessible.
ngrok (https://ngrok.com/) @@ -256,11 +260,11 @@ const bodyParser = require('body-parser'); const webhookSecret = process.env.WEBHOOK_SECRET; const app = express(); -app.post('/webhooks', bodyParser.raw({ type: 'application/json' }), async (req, res) => { +app.post("/webhooks", bodyParser.raw({ type: "application/json" }), async (req, res) => { try { // will raise an exception if the signature is invalid const isValidSignature = mux.webhooks.verifySignature(req.body, req.headers, webhookSecret); - console.log('Success:', isValidSignature); + console.log("Success:", isValidSignature); // convert the raw req.body to JSON, which is originally Buffer (raw) const jsonFormattedBody = JSON.parse(req.body); // await doSomething(); @@ -272,7 +276,7 @@ app.post('/webhooks', bodyParser.raw({ type: 'application/json' }), async (req, }); app.listen(3000, () => { - console.log('Example app listening on port 3000!'); + console.log("Example app listening on port 3000!"); }); ``` @@ -425,9 +429,9 @@ To make requests to undocumented endpoints, you can use `client.get`, `client.po Options on the client, such as retries, will be respected when making these requests. ```ts -await client.post('/some/path', { - body: { some_prop: 'foo' }, - query: { some_query_arg: 'bar' }, +await client.post("/some/path", { + body: { some_prop: "foo" }, + query: { some_query_arg: "bar" }, }); ``` @@ -439,10 +443,10 @@ send will be sent as-is. ```ts client.foo.create({ - foo: 'my_param', + foo: "my_param", bar: 12, // @ts-expect-error baz is not yet public - baz: 'undocumented option', + baz: "undocumented option", }); ``` @@ -469,8 +473,8 @@ add the following import before your first import `from "Mux"`: ```ts // Tell TypeScript and the package to use the global web fetch instead of node-fetch. // Note, despite the name, this does not add any polyfills, but expects them to be provided if needed. 
-import '@mux/mux-node/shims/web'; -import Mux from '@mux/mux-node'; +import "@mux/mux-node/shims/web"; +import Mux from "@mux/mux-node"; ``` To do the inverse, add `import "@mux/mux-node/shims/node"` (which does import polyfills). @@ -482,14 +486,14 @@ You may also provide a custom `fetch` function when instantiating the client, which can be used to inspect or alter the `Request` or `Response` before/after each request: ```ts -import { fetch } from 'undici'; // as one example -import Mux from '@mux/mux-node'; +import { fetch } from "undici"; // as one example +import Mux from "@mux/mux-node"; const client = new Mux({ fetch: async (url: RequestInfo, init?: RequestInit): Promise => { - console.log('About to make a request', url, init); + console.log("About to make a request", url, init); const response = await fetch(url, init); - console.log('Got response', response); + console.log("Got response", response); return response; }, }); diff --git a/nanobanana/README.md b/nanobanana/README.md index 56072398..6deb5c05 100644 --- a/nanobanana/README.md +++ b/nanobanana/README.md @@ -1,4 +1,4 @@ -# Nano Banana MCP +# Nano Banana MCP ## Description @@ -51,11 +51,11 @@ bun run check The MCP requires the following configuration: -| Field | Type | Description | -|-------|------|-------------| -| `NANOBANANA_CONTRACT` | Binding | Contract binding for authorization and billing | -| `FILE_SYSTEM` | Binding | File system binding for storing generated images | -| `NANOBANANA_API_KEY` | string | OpenRouter API key for accessing Gemini models | +| Field | Type | Description | +| --------------------- | ------- | ------------------------------------------------ | +| `NANOBANANA_CONTRACT` | Binding | Contract binding for authorization and billing | +| `FILE_SYSTEM` | Binding | File system binding for storing generated images | +| `NANOBANANA_API_KEY` | string | OpenRouter API key for accessing Gemini models | ## Tools @@ -65,21 +65,21 @@ Generate an image using Gemini models via 
OpenRouter. **Input:** -| Parameter | Type | Required | Description | -|-----------|------|----------|-------------| -| `prompt` | string | ✅ | Text description of the image to generate | -| `baseImageUrl` | string | ❌ | URL of an existing image for image-to-image generation (single image) | -| `baseImageUrls` | string[] | ❌ | Array of image URLs for multi-image generation (e.g., virtual try-on). Takes precedence over `baseImageUrl` | -| `aspectRatio` | enum | ❌ | Output aspect ratio (1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9) | -| `model` | enum | ❌ | Model to use (gemini-2.0-flash-exp, gemini-2.5-pro-image-preview, gemini-2.5-pro-exp-03-25, gemini-3-pro-image-preview, gemini-3.1-flash-image-preview) | +| Parameter | Type | Required | Description | +| --------------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `prompt` | string | ✅ | Text description of the image to generate | +| `baseImageUrl` | string | ❌ | URL of an existing image for image-to-image generation (single image) | +| `baseImageUrls` | string[] | ❌ | Array of image URLs for multi-image generation (e.g., virtual try-on). 
Takes precedence over `baseImageUrl` | +| `aspectRatio` | enum | ❌ | Output aspect ratio (1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9) | +| `model` | enum | ❌ | Model to use (gemini-2.0-flash-exp, gemini-2.5-pro-image-preview, gemini-2.5-pro-exp-03-25, gemini-3-pro-image-preview, gemini-3.1-flash-image-preview) | **Output:** -| Field | Type | Description | -|-------|------|-------------| -| `image` | string | URL of the generated image | -| `error` | boolean | Whether the request failed | -| `finishReason` | string | Native finish reason from the model | +| Field | Type | Description | +| -------------- | ------- | ----------------------------------- | +| `image` | string | URL of the generated image | +| `error` | boolean | Whether the request failed | +| `finishReason` | string | Native finish reason from the model | ### Examples @@ -87,7 +87,7 @@ Generate an image using Gemini models via OpenRouter. ```typescript const result = await client.callTool("GENERATE_IMAGE", { - prompt: "An orange cat sitting on a blue chair, cartoon style" + prompt: "An orange cat sitting on a blue chair, cartoon style", }); ``` @@ -96,7 +96,7 @@ const result = await client.callTool("GENERATE_IMAGE", { ```typescript const result = await client.callTool("GENERATE_IMAGE", { prompt: "Mountain landscape at sunset", - aspectRatio: "16:9" + aspectRatio: "16:9", }); ``` @@ -105,7 +105,7 @@ const result = await client.callTool("GENERATE_IMAGE", { ```typescript const result = await client.callTool("GENERATE_IMAGE", { prompt: "Add snow on the mountains", - baseImageUrl: "https://example.com/landscape.jpg" + baseImageUrl: "https://example.com/landscape.jpg", }); ``` @@ -115,10 +115,10 @@ const result = await client.callTool("GENERATE_IMAGE", { const result = await client.callTool("GENERATE_IMAGE", { prompt: "Virtual try-on: person wearing the garment from the second image", baseImageUrls: [ - "https://example.com/person.jpg", // First image: person photo - 
"https://example.com/t-shirt.jpg" // Second image: garment + "https://example.com/person.jpg", // First image: person photo + "https://example.com/t-shirt.jpg", // Second image: garment ], - aspectRatio: "3:4" + aspectRatio: "3:4", }); ``` @@ -127,7 +127,7 @@ const result = await client.callTool("GENERATE_IMAGE", { ```typescript const result = await client.callTool("GENERATE_IMAGE", { prompt: "A futuristic city", - model: "gemini-2.5-pro-exp-03-25" + model: "gemini-2.5-pro-exp-03-25", }); ``` @@ -153,13 +153,13 @@ nanobanana/ ## Supported Models -| Model | Description | -|-------|-------------| -| `gemini-2.0-flash-exp` | Gemini 2.0 Flash experimental with image generation | -| `gemini-2.5-pro-image-preview` | Gemini 2.5 Pro optimized for image generation | -| `gemini-3-pro-image-preview` | Gemini 3 Pro with advanced image generation | -| `gemini-3.1-flash-image-preview` | **Gemini 3.1 Flash for image generation (default)** ✅ | -| `gemini-2.5-pro-exp-03-25` | Gemini 2.5 Pro experimental with enhanced image quality | +| Model | Description | +| -------------------------------- | ------------------------------------------------------- | +| `gemini-2.0-flash-exp` | Gemini 2.0 Flash experimental with image generation | +| `gemini-2.5-pro-image-preview` | Gemini 2.5 Pro optimized for image generation | +| `gemini-3-pro-image-preview` | Gemini 3 Pro with advanced image generation | +| `gemini-3.1-flash-image-preview` | **Gemini 3.1 Flash for image generation (default)** ✅ | +| `gemini-2.5-pro-exp-03-25` | Gemini 2.5 Pro experimental with enhanced image quality | ## Technologies diff --git a/nanobanana/app.json b/nanobanana/app.json index 186c0910..bb5cc85e 100644 --- a/nanobanana/app.json +++ b/nanobanana/app.json @@ -15,9 +15,16 @@ "metadata": { "categories": ["AI"], "official": false, - "tags": ["image-generation", "gemini", "ai", "text-to-image", "image-to-image", "openrouter", "nanobanana"], + "tags": [ + "image-generation", + "gemini", + "ai", + "text-to-image", + 
"image-to-image", + "openrouter", + "nanobanana" + ], "short_description": "Generate images using AI with Gemini models via OpenRouter", "mesh_description": "The Nano Banana MCP provides AI-powered image generation using Google Gemini models through OpenRouter. This MCP enables AI agents to generate images from text prompts, modify existing images with text instructions, and control output aspect ratios. **Key Features** - Text-to-image generation with detailed prompts. **Image Editing** - Modify existing images using natural language instructions. **Multiple Models** - Support for Gemini 2.0 Flash, 2.5 Flash, and 2.5 Pro image generation models. **Aspect Ratios** - Flexible output dimensions including 1:1, 16:9, 9:16, and more. **Contract Management** - Built-in authorization and billing through the NanoBanana contract system. **File Storage** - Automatic image storage via file system binding. Perfect for creative workflows, content generation, product mockups, and any task requiring AI-generated images." 
} } - diff --git a/nanobanana/package.json b/nanobanana/package.json index 1e2b7a1a..b1b950bf 100644 --- a/nanobanana/package.json +++ b/nanobanana/package.json @@ -1,8 +1,8 @@ { "name": "nanobanana", "version": "1.0.0", - "description": "MCP server for image generation using Gemini models via OpenRouter", "private": true, + "description": "MCP server for image generation using Gemini models via OpenRouter", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/nanobanana/tsconfig.json b/nanobanana/tsconfig.json index 90d5b069..d1f4a7bc 100644 --- a/nanobanana/tsconfig.json +++ b/nanobanana/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023", - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ @@ -24,16 +22,10 @@ /* Path Aliases */ "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ], + "server/*": ["./server/*"] }, /* Types */ - "types": [ - "@types/node", - ] + "types": ["@types/node"] }, - "include": [ - "server", - ] + "include": ["server"] } diff --git a/new-relic/README.md b/new-relic/README.md index c2feb1a7..c9d61210 100644 --- a/new-relic/README.md +++ b/new-relic/README.md @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Query application performance metrics and infrastructure data - Search and analyze logs across services and environments - Investigate distributed traces and identify bottlenecks diff --git a/notion-official/README.md b/notion-official/README.md index 067a6316..157efe3c 100644 --- a/notion-official/README.md +++ b/notion-official/README.md @@ -21,4 +21,3 @@ https://www.notion.so/my-integrations - **GitHub**: https://github.com/makenotion/notion-next - **Website**: https://www.notion.so - **Documentation**: https://developers.notion.com/ - diff --git a/notion-official/app.json b/notion-official/app.json index 7a12ef2b..4ff26d74 100644 --- a/notion-official/app.json 
+++ b/notion-official/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["Productivity", "Documentation", "Collaboration"], "official": true, - "tags": ["notion", "productivity", "documentation", "database", "collaboration", "workspace", "pages", "blocks"], + "tags": [ + "notion", + "productivity", + "documentation", + "database", + "collaboration", + "workspace", + "pages", + "blocks" + ], "short_description": "Official Notion MCP - Manage Notion workspaces, pages, databases, and blocks programmatically", "mesh_description": "The Notion MCP provides comprehensive integration with Notion's workspace platform, enabling AI agents to manage knowledge bases and collaborative documents. This official MCP allows you to create, read, and update Notion pages and blocks with rich content types including text, headings, lists, images, and embeds. Manage Notion databases with full CRUD operations on records, properties, and views. Query databases with filters and sorts, collaborate on shared workspaces, and integrate Notion into automated workflows. Key features include hierarchical page management, rich content block manipulation, database views and filtering, property type management, workspace-wide search capabilities, and real-time content synchronization. Perfect for building knowledge management systems, documentation generators, project management tools, or AI agents that can maintain and organize information in Notion. The MCP supports OAuth authentication and provides natural language access to all major Notion features including pages, databases, blocks, users, and comments." } } - diff --git a/object-storage/README.md b/object-storage/README.md index 12680ddf..f6283d72 100644 --- a/object-storage/README.md +++ b/object-storage/README.md @@ -113,11 +113,13 @@ All tools require authentication and use the configured state settings. List objects in the bucket with pagination support. 
**Input:** + - `prefix` (optional string) - Filter objects by prefix (e.g., "folder/" for folder contents) - `maxKeys` (optional number) - Maximum number of keys to return (default: 1000) - `continuationToken` (optional string) - Token for pagination from previous response **Output:** + - `objects` (array) - Array of objects with `key`, `size`, `lastModified`, `etag` - `nextContinuationToken` (optional string) - Token for fetching next page - `isTruncated` (boolean) - Whether there are more results available @@ -127,9 +129,11 @@ List objects in the bucket with pagination support. Get metadata for an object without downloading it (HEAD operation). **Input:** + - `key` (string) - Object key/path to get metadata for **Output:** + - `contentType` (optional string) - MIME type of the object - `contentLength` (number) - Size of the object in bytes - `lastModified` (string) - Last modified timestamp @@ -141,10 +145,12 @@ Get metadata for an object without downloading it (HEAD operation). Generate a presigned URL for downloading an object. **Input:** + - `key` (string) - Object key/path to generate URL for - `expiresIn` (optional number) - URL expiration time in seconds **Output:** + - `url` (string) - Presigned URL for downloading the object - `expiresIn` (number) - Expiration time in seconds that was used @@ -153,11 +159,13 @@ Generate a presigned URL for downloading an object. Generate a presigned URL for uploading an object. **Input:** + - `key` (string) - Object key/path for the upload - `expiresIn` (optional number) - URL expiration time in seconds - `contentType` (optional string) - MIME type for the object being uploaded **Output:** + - `url` (string) - Presigned URL for uploading the object - `expiresIn` (number) - Expiration time in seconds that was used @@ -166,9 +174,11 @@ Generate a presigned URL for uploading an object. Delete a single object from the bucket. 
**Input:** + - `key` (string) - Object key/path to delete **Output:** + - `success` (boolean) - Whether the deletion was successful - `key` (string) - The key that was deleted @@ -177,9 +187,11 @@ Delete a single object from the bucket. Delete multiple objects in a single batch operation (max 1000 objects). **Input:** + - `keys` (array of strings) - Array of object keys/paths to delete (max 1000) **Output:** + - `deleted` (array of strings) - Array of successfully deleted keys - `errors` (array) - Array of errors for failed deletions with `key` and `message` diff --git a/object-storage/app.json b/object-storage/app.json index e78649b3..cb90ecf8 100644 --- a/object-storage/app.json +++ b/object-storage/app.json @@ -12,7 +12,18 @@ "metadata": { "categories": ["Storage"], "official": false, - "tags": ["s3", "storage", "object-storage", "aws", "cloudflare-r2", "gcs", "files", "upload", "download", "presigned-urls"], + "tags": [ + "s3", + "storage", + "object-storage", + "aws", + "cloudflare-r2", + "gcs", + "files", + "upload", + "download", + "presigned-urls" + ], "short_description": "Manage files in S3-compatible object storage like AWS S3, Cloudflare R2, and Google Cloud Storage", "mesh_description": "The Object Storage MCP provides comprehensive tools for managing files in any S3-compatible object storage service, including AWS S3, Cloudflare R2, Google Cloud Storage, MinIO, and DigitalOcean Spaces. This MCP enables AI agents to list objects in buckets with prefix filtering and pagination, retrieve object metadata without downloading file contents, and generate presigned URLs for secure uploads and downloads. The presigned URL functionality allows temporary access to private objects without exposing credentials, perfect for enabling client-side file uploads and secure file sharing. The MCP supports batch operations for efficiently deleting multiple objects in a single request. 
Configuration is flexible, allowing connection to any S3-compatible endpoint with custom regions and credentials. Ideal for building file management systems, content delivery workflows, backup automation, and media processing pipelines. The integration supports all standard S3 operations through a unified interface that works across multiple cloud providers." } diff --git a/object-storage/index.html b/object-storage/index.html index 8291255a..03b0dca9 100644 --- a/object-storage/index.html +++ b/object-storage/index.html @@ -1,4 +1,4 @@ - + @@ -8,29 +8,16 @@ - + - + - + React + Tailwind View diff --git a/object-storage/package.json b/object-storage/package.json index e7d9c162..42c2f826 100644 --- a/object-storage/package.json +++ b/object-storage/package.json @@ -1,8 +1,8 @@ { "name": "object-storage", "version": "1.0.0", - "description": "S3-compatible object storage MCP", "private": true, + "description": "S3-compatible object storage MCP", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/object-storage/tsconfig.json b/object-storage/tsconfig.json index 65a71cd7..d1f4a7bc 100644 --- a/object-storage/tsconfig.json +++ b/object-storage/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023", - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ @@ -24,16 +22,10 @@ /* Path Aliases */ "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ], + "server/*": ["./server/*"] }, /* Types */ - "types": [ - "@types/node", - ] + "types": ["@types/node"] }, - "include": [ - "server", - ] -} \ No newline at end of file + "include": ["server"] +} diff --git a/openrouter/README.md b/openrouter/README.md index 70254c8c..80b08a9a 100644 --- a/openrouter/README.md +++ b/openrouter/README.md @@ -1,5 +1,5 @@ # OpenRouter MCP - + A comprehensive Model Context Protocol (MCP) server for [OpenRouter](https://openrouter.ai), providing unified access 
to hundreds of AI models with intelligent routing, cost optimization, and fallback mechanisms. ## Overview @@ -18,12 +18,14 @@ OpenRouter is a unified API for accessing AI models from multiple providers (Ope ### 🔧 **Tools & APIs** #### Model Discovery (4 tools) + 1. **`LIST_MODELS`** - Paginated model catalog with filters, sorting, and curated “well-known” ordering 2. **`GET_MODEL`** - Get detailed model information 3. **`COMPARE_MODELS`** - Compare multiple models side-by-side 4. **`RECOMMEND_MODEL`** - Get AI model recommendations for tasks #### AI Chat + - **`CHAT_COMPLETION`** – Non-streaming chat completions - **`GET_STREAM_ENDPOINT`** – Returns the deployed `POST /api/chat` URL plus usage instructions - **`POST /api/chat`** – Real-time streaming endpoint built with the [Vercel AI SDK](https://github.com/vercel/ai) that emits Server-Sent Events compatible with `useChat`, `streamText`, or any SSE client. Payload mirrors the `CHAT_COMPLETION` tool schema. @@ -33,6 +35,7 @@ OpenRouter is a unified API for accessing AI models from multiple providers (Ope --> ### 🌐 **API Routes** + - **`POST /api/chat`** - Streams OpenRouter responses directly from the worker (no intermediate tool call required) ## Installation @@ -86,16 +89,16 @@ const { models } = await LIST_MODELS({ filter: { modality: "text+image->text", maxPromptPrice: 5.0, - minContextLength: 100000 + minContextLength: 100000, }, - sortBy: "price" + sortBy: "price", }); // Search for specific models const { models } = await LIST_MODELS({ filter: { - search: "gpt-4" - } + search: "gpt-4", + }, }); // Jump to the second page (results 51-100) @@ -121,7 +124,7 @@ Get comprehensive information about a specific model: ```typescript const model = await GET_MODEL({ - modelId: "anthropic/claude-3.5-sonnet" + modelId: "anthropic/claude-3.5-sonnet", }); console.log(model.pricing); // { prompt: "3", completion: "15" } @@ -135,16 +138,12 @@ Compare multiple models to choose the best one: ```typescript const { comparison, 
recommendation } = await COMPARE_MODELS({ - modelIds: [ - "openai/gpt-4o", - "anthropic/claude-3.5-sonnet", - "google/gemini-2.0-flash-exp" - ], - criteria: ["price", "context_length"] + modelIds: ["openai/gpt-4o", "anthropic/claude-3.5-sonnet", "google/gemini-2.0-flash-exp"], + criteria: ["price", "context_length"], }); console.log(recommendation); -// "google/gemini-2.0-flash-exp is most cost-effective. +// "google/gemini-2.0-flash-exp is most cost-effective. // anthropic/claude-3.5-sonnet has the largest context window." ``` @@ -158,11 +157,11 @@ const { recommendations } = await RECOMMEND_MODEL({ requirements: { maxCostPer1MTokens: 10, minContextLength: 50000, - prioritize: "quality" - } + prioritize: "quality", + }, }); -recommendations.forEach(rec => { +recommendations.forEach((rec) => { console.log(`${rec.name} (score: ${rec.score})`); console.log(`Reasoning: ${rec.reasoning}`); console.log(`Price: $${rec.pricing.promptPrice}/1M tokens\n`); @@ -332,18 +331,24 @@ openrouter/ OpenRouter supports three routing modes: 1. **Single Model**: Direct selection + ```typescript - { model: "openai/gpt-4o" } + { + model: "openai/gpt-4o"; + } ``` 2. **Auto Router**: Intelligent selection by NotDiamond + ```typescript - { model: "openrouter/auto" } + { + model: "openrouter/auto"; + } ``` 3. **Fallback Chain**: Array of models with automatic fallback ```typescript - { + { model: "openai/gpt-4o", models: ["anthropic/claude-3.5-sonnet", "google/gemini-2.0-flash-exp"] } @@ -463,13 +468,19 @@ Configuration is managed through the Deco platform when you install the MCP. ## Best Practices ### 1. **Use Auto-Router by Default** + Let OpenRouter choose the best model for your prompt: + ```typescript -{ model: "openrouter/auto" } +{ + model: "openrouter/auto"; +} ``` ### 2. **Set Up Fallback Chains** + Ensure reliability with fallbacks: + ```typescript { model: "openai/gpt-4o", @@ -478,13 +489,17 @@ Ensure reliability with fallbacks: ``` ### 3. 
**Monitor Costs** + Always check cost estimates: + ```typescript console.log(`Cost: $${response.estimatedCost.total}`); ``` ### 4. **Filter Models Appropriately** + Use filters to find the right model: + ```typescript { filter: { @@ -496,6 +511,7 @@ Use filters to find the right model: ``` ### 5. **Use Streaming for Long Responses** + For long-form content, hit `POST /api/chat` with `stream: true` semantics and pipe the response to your UI (see the examples above referencing the [OpenRouter streaming guide](https://openrouter.ai/docs/api-reference/streaming)). ## Limitations @@ -521,6 +537,7 @@ Contributions are welcome! Please follow the existing code structure and add tes ## Changelog ### v1.0.0 - Initial Release + - ✅ Model discovery and filtering - ✅ Model comparison and recommendations - ✅ Chat completions (streaming and non-streaming) @@ -534,4 +551,3 @@ Contributions are welcome! Please follow the existing code structure and add tes - [OpenRouter](https://openrouter.ai) - Unified AI model API - [Deco MCP Platform](https://deco.cx) - MCP hosting and management - [Model Context Protocol](https://modelcontextprotocol.io) - MCP specification - diff --git a/openrouter/app.json b/openrouter/app.json index 0f1cc4a0..3389b23b 100644 --- a/openrouter/app.json +++ b/openrouter/app.json @@ -16,4 +16,4 @@ "short_description": "OpenRouter App Connection for LLM uses.", "mesh_description": "The OpenRouter MCP provides a unified gateway to access multiple large language models (LLMs) through a single integration point. OpenRouter aggregates various AI model providers including OpenAI, Anthropic, Google, Meta, and many others, offering a standardized API interface for chat completions and text generation. This MCP enables AI agents to dynamically select and utilize different LLMs based on specific requirements such as cost, speed, capability, or context window size. It supports model routing, fallback strategies, cost tracking, and usage analytics. 
The integration is perfect for applications that need flexibility in model selection, want to optimize costs by using the most appropriate model for each task, or require high availability through automatic failover between providers. Ideal for developers building AI-powered applications, chatbots, content generation systems, or multi-model comparison tools." } -} \ No newline at end of file +} diff --git a/openrouter/package.json b/openrouter/package.json index d1c5fb22..afbad8bb 100644 --- a/openrouter/package.json +++ b/openrouter/package.json @@ -1,9 +1,14 @@ { "name": "@decocms/openrouter", "version": "1.0.0", - "description": "OpenRouter AI model routing and management", "private": true, + "description": "OpenRouter AI model routing and management", "type": "module", + "exports": { + "./tools": "./server/tools/index.ts", + "./types": "./server/lib/types.ts", + "./hooks": "./server/tools/hooks.ts" + }, "scripts": { "dev": "bun run --hot server/main.ts", "build:server": "NODE_ENV=production bun build server/main.ts --target=bun --outfile=dist/server/main.js", @@ -11,11 +16,6 @@ "publish": "cat app.json | deco registry publish -w /shared/deco -y", "check": "tsc --noEmit" }, - "exports": { - "./tools": "./server/tools/index.ts", - "./types": "./server/lib/types.ts", - "./hooks": "./server/tools/hooks.ts" - }, "dependencies": { "@ai-sdk/provider": "^3.0.2", "@ai-sdk/provider-utils": "^4.0.4", @@ -41,4 +41,4 @@ "engines": { "node": ">=22.0.0" } -} \ No newline at end of file +} diff --git a/openrouter/server/lib/openrouter-client.ts b/openrouter/server/lib/openrouter-client.ts index 0218c039..d7b9b1d3 100644 --- a/openrouter/server/lib/openrouter-client.ts +++ b/openrouter/server/lib/openrouter-client.ts @@ -111,7 +111,9 @@ export class OpenRouterClient { */ async getGeneration(generationId: string): Promise { const response: GetGenerationResponse = - await this.sdk.generations.getGeneration({ id: generationId }); + await this.sdk.generations.getGeneration({ + id: 
generationId, + }); return this.toGenerationInfo(response.data); } diff --git a/openrouter/tsconfig.json b/openrouter/tsconfig.json index 63f6f70b..48f9a80d 100644 --- a/openrouter/tsconfig.json +++ b/openrouter/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/package.json b/package.json index fb05a915..55b6d352 100644 --- a/package.json +++ b/package.json @@ -1,24 +1,8 @@ { "name": "@decocms/mcps", "version": "1.0.0", - "description": "First-party MCPs maintained by the decocms team.", - "type": "module", "private": true, - "devDependencies": { - "@types/bun": "latest", - "typescript": "5.9.3", - "oxfmt": "^0.9.0", - "oxlint": "^1.26.0" - }, - "scripts": { - "new": "bun scripts/new.ts", - "fmt": "oxfmt", - "lint": "oxlint", - "check": "bun run ./scripts/check.ts", - "clean": "rm -rf node_modules */node_modules */dist */.vite */.wrangler */.deco", - "prepare": "sh scripts/setup-hooks.sh", - "build": "bun run ./scripts/build-mcp.ts" - }, + "description": "First-party MCPs maintained by the decocms team.", "workspaces": [ "airtable", "blog-post-generator", @@ -75,9 +59,25 @@ "whatsapp-management", "whisper" ], + "type": "module", + "scripts": { + "new": "bun scripts/new.ts", + "fmt": "oxfmt", + "lint": "oxlint", + "check": "bun run ./scripts/check.ts", + "clean": "rm -rf node_modules */node_modules */dist */.vite */.wrangler */.deco", + "prepare": "sh scripts/setup-hooks.sh", + "build": "bun run ./scripts/build-mcp.ts" + }, "dependencies": { "@decocms/runtime": "^1.2.10", "@types/node": "^24.10.0", "zod": "^3.24.3" + }, + "devDependencies": { + "@types/bun": "latest", + "oxfmt": "^0.9.0", + "oxlint": "^1.26.0", + "typescript": "5.9.3" } } diff --git a/perplexity/README.md b/perplexity/README.md index 4e1d5bc3..329ed60e 100644 --- a/perplexity/README.md +++ b/perplexity/README.md @@ -1,4 +1,4 @@ -# 
Perplexity AI MCP +# Perplexity AI MCP ## Project Description @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Ask questions in natural language and receive web-grounded answers - Conduct multi-turn conversations with message history context - Customize search parameters (domains, recency, context) @@ -38,27 +39,32 @@ This MCP server allows client applications to: ### Local Installation 1. Clone the repository and enter the Perplexity directory: + ```bash git clone https://github.com/deco-cx/mcps.git cd mcps/perplexity ``` 2. Install dependencies: + ```bash bun install ``` 3. Configure the necessary environment variables: + ```bash bun run configure ``` 4. Generate TypeScript types: + ```bash bun run gen ``` 5. Start the development server: + ```bash bun run dev ``` @@ -105,8 +111,8 @@ const result = await client.callTool("chat_with_perplexity", { messages: [ { role: "user", content: "What is artificial intelligence?" }, { role: "assistant", content: "AI is the simulation of processes..." }, - { role: "user", content: "What are the main applications?" } - ] + { role: "user", content: "What are the main applications?" }, + ], }); ``` @@ -118,7 +124,7 @@ const result = await client.callTool("ask_perplexity", { search_recency_filter: "day", search_domain_filter: ["techcrunch.com", "theverge.com"], search_context_size: "maximum", - model: "sonar-pro" + model: "sonar-pro", }); ``` @@ -128,7 +134,7 @@ const result = await client.callTool("ask_perplexity", { const result = await client.callTool("ask_perplexity", { prompt: "Explain the Pythagorean theorem and how to prove it", model: "sonar-reasoning-pro", - temperature: 0.1 + temperature: 0.1, }); ``` @@ -137,7 +143,7 @@ const result = await client.callTool("ask_perplexity", { ```typescript try { const result = await client.callTool("ask_perplexity", { - prompt: "My question..." 
+ prompt: "My question...", }); console.log(result.answer); } catch (error) { @@ -169,17 +175,23 @@ perplexity/ The project uses the following Cloudflare Workers bindings: #### `PERPLEXITY_API_KEY` + Perplexity AI API key: + - Get your key at: https://www.perplexity.ai/settings/api - Configure during integration installation #### `DEFAULT_MODEL` + Default model to use (optional): + - Options: `sonar`, `sonar-pro`, `sonar-deep-research`, `sonar-reasoning-pro`, `sonar-reasoning` - Default: `sonar` #### `PERPLEXITY_CONTRACT` + Authorization and pay-per-use system: + - `CONTRACT_AUTHORIZE`: Authorizes a transaction before the query - `CONTRACT_SETTLE`: Settles the transaction after the query - **Configured clauses:** @@ -187,7 +199,9 @@ Authorization and pay-per-use system: - `perplexity:chat`: $0.02 per chat message #### `FILE_SYSTEM` + File storage system: + - `FS_READ`: Reads files from the file system - `FS_WRITE`: Writes files to the file system @@ -231,9 +245,11 @@ const StateSchema = BaseStateSchema.extend({ ### Available MCP Tools #### `ask_perplexity` + Asks a simple question to Perplexity AI. **Parameters:** + - `prompt` (string, required): The question or prompt - `model` (string, optional): Model to use (default: "sonar") - `max_tokens` (number, optional): Maximum tokens in the response @@ -246,9 +262,11 @@ Asks a simple question to Perplexity AI. - `search_context_size` (string, optional): Amount of context ("low", "medium", "high", "maximum") #### `chat_with_perplexity` + Maintains a multi-turn conversation with Perplexity AI. **Parameters:** + - `messages` (Message[], required): Array of conversation messages - Each message: `{ role: "system" | "user" | "assistant", content: string }` - All other parameters from `ask_perplexity` are also available @@ -264,6 +282,7 @@ Maintains a multi-turn conversation with Perplexity AI. 
### Input/Output Format #### Input (`ask_perplexity`) + ```typescript { prompt: string; @@ -275,16 +294,20 @@ Maintains a multi-turn conversation with Perplexity AI. ``` #### Output + ```typescript { - content: [{ - type: "text", - text: string // Stringified JSON with answer, usage, etc - }] + content: [ + { + type: "text", + text: string, // Stringified JSON with answer, usage, etc + }, + ]; } ``` JSON format: + ```typescript { answer: string; // Generated answer diff --git a/perplexity/package.json b/perplexity/package.json index a17a31f9..b795ef86 100644 --- a/perplexity/package.json +++ b/perplexity/package.json @@ -1,8 +1,8 @@ { "name": "perplexity", "version": "1.0.0", - "description": "Perplexity AI MCP Server - Web-backed AI answers and conversations", "private": true, + "description": "Perplexity AI MCP Server - Web-backed AI answers and conversations", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/perplexity/tsconfig.json b/perplexity/tsconfig.json index 77db41f1..b4b02a45 100644 --- a/perplexity/tsconfig.json +++ b/perplexity/tsconfig.json @@ -29,7 +29,5 @@ "server/*": ["./server/*"] } }, - "include": [ - "server" - ] + "include": ["server"] } diff --git a/pinecone/package.json b/pinecone/package.json index e8be614f..ff426f97 100644 --- a/pinecone/package.json +++ b/pinecone/package.json @@ -1,8 +1,8 @@ { "name": "pinecone", "version": "1.0.0", - "description": "MCP server for Pinecone Assistant API integration", "private": true, + "description": "MCP server for Pinecone Assistant API integration", "type": "module", "scripts": { "dev": "deco dev --vite", diff --git a/pinecone/tsconfig.json b/pinecone/tsconfig.json index c5b23929..392b6275 100644 --- a/pinecone/tsconfig.json +++ b/pinecone/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git 
a/pinecone/wrangler.toml b/pinecone/wrangler.toml index 28780d6a..cd0a3dba 100644 --- a/pinecone/wrangler.toml +++ b/pinecone/wrangler.toml @@ -2,7 +2,7 @@ name = "pinecone" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] @@ -40,4 +40,4 @@ description = "$0.002 per 1000 fetches" [[deco.bindings.contract.clauses]] id = "pinecone:delete" price = 0.001 -description = "$0.001 per 1000 deletes" \ No newline at end of file +description = "$0.001 per 1000 deletes" diff --git a/postgresql/README.md b/postgresql/README.md index a9d50eea..42cdc048 100644 --- a/postgresql/README.md +++ b/postgresql/README.md @@ -1,9 +1,11 @@ # What is WayStation - [WayStation](https://waystation.ai) connects Claude Desktop, ChatGPT and any MCP host with the productivity tools you use daily such as Notion, Monday, Airtable, Jira etc. through a no-code, secure integration hub. -***The original local WayStation MCP server has been deprecated in favor of the new remote MCP server hosted at https://waystation.ai/mcp. Please refer to the new WayStation MCP server documentation here*** + [WayStation](https://waystation.ai) connects Claude Desktop, ChatGPT and any MCP host with the productivity tools you use daily such as Notion, Monday, Airtable, Jira etc. through a no-code, secure integration hub. + +**_The original local WayStation MCP server has been deprecated in favor of the new remote MCP server hosted at https://waystation.ai/mcp. Please refer to the new WayStation MCP server documentation here_** ## Overview + WayStation MCP server is a universal remote MCP server that seamlessly connects Claude (and other clients) to a broad range of productivity tools, including Notion, Monday, AirTable, etc. 
- WayStation MCP supports both Streamable HTTPS and SSE transports @@ -11,16 +13,19 @@ WayStation MCP server is a universal remote MCP server that seamlessly connects - WayStation also provides preauthenticated individual endpoints like https://waystation.ai/mcp/. Any registered user can get one in their dashboard at https://waystation.ai/dashboard ## Supported providers + - WayStation supports the following productivity apps: [Notion](https://waystation.ai/connect/notion), [Monday](https://waystation.ai/connect/monday), [Asana](https://waystation.ai/connect/asana), [Linear](https://waystation.ai/connect/linear), [Atlassian JIRA/Confluence](https://waystation.ai/connect/atlassian), [Slack](https://waystation.ai/connect/slack), [Teams](https://waystation.ai/connect/teams), [Google Drive](https://waystation.ai/connect/gdrive) (including Docs and Sheets), [Office 365](https://waystation.ai/connect/office), [Airtable](https://waystation.ai/connect/airtable), [Miro](https://waystation.ai/connect/miro), [Intercom](https://waystation.ai/connect/intercom), [PayPal](https://waystation.ai/connect/paypal). - Users can browse available integrations/providers in the [Integrations Marketplace](https://waystation.ai/marketplace) - New integrations are added regularly based on customer requests or community contributions. If you have an integration request, please contact us at support@waystation.ai. - Users can connect their apps in the [dashboard](https://waystation.ai/dashboard). The connection process may vary by app but generally involves OAuth2 authentication flow with some additional steps for certain apps. ## Supported AI apps + - WayStation remote MCP was tested with Claude, Cursor, Cline, WindSurf, and MCP-remote STDIO proxy provider - For Claude, user should go into their Settings, then Integrations and click "Add Integration". 
Then enter "WayStation" as the Server Name and unique MCP URL from user's dashboard - For Cline, user should simply go into the MCP Server screen, switch to the Remote Servers tab, enter "WayStation" as the Server Name and unique MCP URL from user's dashboard - For Cursor, user should go to the Cursor Settings, MCP tab and click "Add new global MCP server". In mcp.json file user should add the entry for WayStation as following: + ```json "WayStation": { "url": "https://waystation.ai/mcp/" @@ -28,7 +33,9 @@ WayStation MCP server is a universal remote MCP server that seamlessly connects ``` ## Use Cases + WayStation supports a variety of productivity and automation use cases listed below: + - [Project Management](https://waystation.ai/ai/project-management) - [Task Automation](https://waystation.ai/ai/task-automation) - [Meeting Summaries & Action Items](https://waystation.ai/ai/meeting-summaries) diff --git a/postman/README.md b/postman/README.md index ee56c943..ec42df6f 100644 --- a/postman/README.md +++ b/postman/README.md @@ -21,4 +21,3 @@ https://go.postman.co/settings/me/api-keys - **GitHub**: https://github.com/postmanlabs/postman-mcp-server - **Website**: https://www.postman.com - **Documentation**: https://learning.postman.com/docs/developer/intro-api/ - diff --git a/postman/app.json b/postman/app.json index cc49a34d..e7ab029b 100644 --- a/postman/app.json +++ b/postman/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["Development", "API Testing"], "official": true, - "tags": ["postman", "api", "testing", "collections", "requests", "development", "automation", "workflows"], + "tags": [ + "postman", + "api", + "testing", + "collections", + "requests", + "development", + "automation", + "workflows" + ], "short_description": "Official Postman MCP - Manage Postman collections, environments, and API workflows programmatically", "mesh_description": "The Postman MCP provides comprehensive integration with the Postman API platform, enabling AI agents to 
manage and automate API development workflows. This official MCP allows you to create, read, update, and organize Postman collections and requests, manage environments and variables for different testing contexts, execute API requests and tests programmatically, share and collaborate on API documentation, and integrate API testing into CI/CD pipelines. Key features include full CRUD operations on collections, automated test execution and reporting, environment variable management, workspace collaboration tools, and API monitoring capabilities. Perfect for developers who want to automate API testing, integrate Postman into development workflows, or build AI agents that can interact with and manage API collections. The MCP supports authentication via Postman API keys and provides access to all major Postman features through natural language commands." } } - diff --git a/readonly-sql/README.md b/readonly-sql/README.md index 1a52e856..d6aa4d87 100644 --- a/readonly-sql/README.md +++ b/readonly-sql/README.md @@ -22,6 +22,7 @@ A Model Context Protocol (MCP) server that provides secure, read-only SQL query ### Setup 1. Install dependencies: + ```bash bun install ``` @@ -35,21 +36,26 @@ bun install When installing this MCP, you'll need to provide: ### Database Type + Choose from: + - `postgres` - PostgreSQL database (currently supported) - `mysql` - MySQL database (planned) - `sqlite` - SQLite database (planned) ### Connection String + Format depends on your database type: **PostgreSQL:** + ``` postgresql://username:password@hostname:port/database postgresql://username:password@hostname:port/database?sslmode=require ``` Examples: + - Local: `postgresql://myuser:mypassword@localhost:5432/mydb` - Remote: `postgresql://user:pass@db.example.com:5432/production?sslmode=require` - Supabase: `postgresql://postgres:[YOUR-PASSWORD]@db.[PROJECT-REF].supabase.co:5432/postgres` @@ -62,11 +68,13 @@ Examples: Execute a read-only SQL query against the configured database. 
**Input:** + - `query` (string, required): The SQL query to execute. Must be read-only (SELECT, SHOW, DESCRIBE, EXPLAIN, etc.) - `params` (array, optional): Parameters for parameterized queries (use $1, $2, etc. for PostgreSQL) - `limit` (number, optional, default: 1000): Maximum number of rows to return **Output:** + - `rows` (array): Array of result rows, each row is an object with column names as keys - `totalRowCount` (number): Total number of rows that matched the query - `returnedCount` (number): Number of rows actually returned (after applying limit) @@ -78,19 +86,19 @@ Execute a read-only SQL query against the configured database. ```typescript // Simple query const result = await QUERY_SQL({ - query: "SELECT * FROM users WHERE active = true" + query: "SELECT * FROM users WHERE active = true", }); // Parameterized query (PostgreSQL) const result = await QUERY_SQL({ query: "SELECT * FROM users WHERE email = $1", - params: ["user@example.com"] + params: ["user@example.com"], }); // Query with custom limit const result = await QUERY_SQL({ query: "SELECT * FROM large_table", - limit: 100 + limit: 100, }); // Complex query with JOINs @@ -103,7 +111,7 @@ const result = await QUERY_SQL({ GROUP BY u.id, u.name ORDER BY order_count DESC `, - params: ["2024-01-01"] + params: ["2024-01-01"], }); ``` @@ -114,18 +122,23 @@ const result = await QUERY_SQL({ All queries are validated before execution to ensure they are read-only. 
The following are blocked: **Write Operations:** + - `INSERT`, `UPDATE`, `DELETE`, `TRUNCATE`, `MERGE`, `UPSERT` **Schema Modifications:** + - `CREATE`, `ALTER`, `DROP`, `RENAME` **Transaction Control:** + - `COMMIT`, `ROLLBACK`, `SAVEPOINT` **Permission Changes:** + - `GRANT`, `REVOKE` **Dangerous Operations:** + - `EXEC`, `EXECUTE`, `CALL` - File operations (`INTO OUTFILE`, `LOAD_FILE`) - System commands @@ -207,8 +220,8 @@ Example for MySQL: ```typescript // server/lib/clients/mysql.ts -import type { DatabaseClient, QueryResult } from '../db-client.ts'; -import mysql from 'mysql2/promise'; +import type { DatabaseClient, QueryResult } from "../db-client.ts"; +import mysql from "mysql2/promise"; export class MySQLClient implements DatabaseClient { private connection: mysql.Connection; @@ -264,4 +277,3 @@ MIT ## Support For issues, questions, or contributions, please visit the [GitHub repository](https://github.com/deco-cx/apps). - diff --git a/readonly-sql/package.json b/readonly-sql/package.json index 1382a5a9..3b0e4543 100644 --- a/readonly-sql/package.json +++ b/readonly-sql/package.json @@ -1,8 +1,8 @@ { "name": "readonly-sql", "version": "1.0.0", - "description": "Minimal read-only SQL MCP server for query handling", "private": true, + "description": "Minimal read-only SQL MCP server for query handling", "type": "module", "scripts": { "dev": "deco dev --vite", diff --git a/readonly-sql/tsconfig.json b/readonly-sql/tsconfig.json index 57ec6744..e084b553 100644 --- a/readonly-sql/tsconfig.json +++ b/readonly-sql/tsconfig.json @@ -34,13 +34,6 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ], - "exclude": [ - "server/**/tests/**", - "node_modules" - ] + "include": ["server", "shared", "vite.config.ts"], + "exclude": ["server/**/tests/**", "node_modules"] } diff --git a/readonly-sql/wrangler.toml b/readonly-sql/wrangler.toml index ba8fadb2..bfb6cbdd 100644 --- a/readonly-sql/wrangler.toml +++ 
b/readonly-sql/wrangler.toml @@ -2,7 +2,7 @@ name = "readonly-sql" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] diff --git a/reddit/README.md b/reddit/README.md index d34760b8..ff17fd09 100644 --- a/reddit/README.md +++ b/reddit/README.md @@ -9,6 +9,7 @@ MCP server para interagir com o Reddit. Permite buscar posts de subreddits e pes Busca posts de um subreddit específico. **Parâmetros:** + - `subreddit` (obrigatório): Nome do subreddit (sem o "r/"). Ex: "mcp", "programming", "news" - `sort` (opcional): Como ordenar os posts - "hot", "new", "top", "rising" (padrão: "hot") - `time` (opcional): Filtro de tempo para ordenação "top" - "hour", "day", "week", "month", "year", "all" @@ -16,6 +17,7 @@ Busca posts de um subreddit específico. - `after` (opcional): Cursor para paginação **Exemplo de uso:** + ``` Busque os posts mais recentes do r/mcp ``` @@ -25,6 +27,7 @@ Busque os posts mais recentes do r/mcp Pesquisa posts no Reddit por termo de busca. **Parâmetros:** + - `query` (obrigatório): Termo de busca - `subreddit` (opcional): Limitar busca a um subreddit específico - `sort` (opcional): Como ordenar - "relevance", "hot", "top", "new", "comments" (padrão: "relevance") @@ -33,6 +36,7 @@ Pesquisa posts no Reddit por termo de busca. 
- `after` (opcional): Cursor para paginação **Exemplo de uso:** + ``` Pesquise por "MCP server" no Reddit Busque posts sobre "AI agents" no r/LocalLLaMA @@ -57,5 +61,3 @@ bun run check # Deploy bun run deploy ``` - - diff --git a/reddit/package.json b/reddit/package.json index f6d04fcf..b069b3a1 100644 --- a/reddit/package.json +++ b/reddit/package.json @@ -1,8 +1,8 @@ { "name": "reddit", "version": "1.0.0", - "description": "MCP server for Reddit - search subreddits and browse posts", "private": true, + "description": "MCP server for Reddit - search subreddits and browse posts", "type": "module", "scripts": { "dev": "deco dev --vite", @@ -32,5 +32,3 @@ "node": ">=22.0.0" } } - - diff --git a/reddit/tsconfig.json b/reddit/tsconfig.json index f8bcfbbe..392b6275 100644 --- a/reddit/tsconfig.json +++ b/reddit/tsconfig.json @@ -34,11 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } - - diff --git a/reddit/wrangler.toml b/reddit/wrangler.toml index 6e96672f..59f671bf 100644 --- a/reddit/wrangler.toml +++ b/reddit/wrangler.toml @@ -2,7 +2,7 @@ name = "reddit" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] diff --git a/redpanda/README.md b/redpanda/README.md index 395bd9cf..d9887efd 100644 --- a/redpanda/README.md +++ b/redpanda/README.md @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Query Redpanda documentation for configuration and deployment guidance - Find API references and code examples for producers, consumers, and streams - Access guidance on Redpanda cluster management and operations diff --git a/registry/package.json b/registry/package.json index cac46480..d0a3fadb 100644 --- a/registry/package.json +++ b/registry/package.json @@ -1,8 +1,8 @@ { "name": "registry", "version": "1.0.0", - 
"description": "MCP Registry Server", "private": true, + "description": "MCP Registry Server", "type": "module", "scripts": { "dev": "bun run server/main.ts", @@ -42,4 +42,4 @@ "engines": { "node": ">=22.0.0" } -} \ No newline at end of file +} diff --git a/registry/tsconfig.json b/registry/tsconfig.json index 56a6d3f2..f0eb4e14 100644 --- a/registry/tsconfig.json +++ b/registry/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@types/node", "vite/client"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/replicate/README.md b/replicate/README.md index cb850b9f..955d131f 100644 --- a/replicate/README.md +++ b/replicate/README.md @@ -1,4 +1,4 @@ -# Replicate MCP +# Replicate MCP MCP (Model Context Protocol) server for interacting with the Replicate API, enabling execution of ML/AI models in the cloud. @@ -7,9 +7,11 @@ MCP (Model Context Protocol) server for interacting with the Replicate API, enab This MCP provides the following tools: ### 🚀 Run Model + Execute predictions using Replicate models. Supports any model available on the platform. **Usage example:** + ```typescript { model: "stability-ai/sdxl", @@ -23,42 +25,50 @@ Execute predictions using Replicate models. Supports any model available on the ``` ### 📊 Get Prediction + Get the status and results of a prediction by ID. **Usage example:** + ```typescript { - predictionId: "abc123xyz" + predictionId: "abc123xyz"; } ``` ### ❌ Cancel Prediction + Cancel a running prediction. **Usage example:** + ```typescript { - predictionId: "abc123xyz" + predictionId: "abc123xyz"; } ``` ### 📋 List Models + List available models from a specific user or organization. **Usage example:** + ```typescript { - owner: "stability-ai" + owner: "stability-ai"; } ``` ### 🔍 Get Model + Get detailed information about a specific model, including input/output schema. 
**Usage example:** + ```typescript { - model: "stability-ai/sdxl" + model: "stability-ai/sdxl"; } ``` @@ -72,6 +82,7 @@ Get detailed information about a specific model, including input/output schema. ### Installation 1. Install dependencies: + ```bash bun install ``` @@ -133,6 +144,6 @@ Replicate usage is consumption-based. Each model has its own cost per execution. ## Support For issues or questions: + - [Replicate Community](https://discord.gg/replicate) - [GitHub Issues](https://github.com/replicate/replicate) - diff --git a/replicate/package.json b/replicate/package.json index d9398109..6c7dc74b 100644 --- a/replicate/package.json +++ b/replicate/package.json @@ -1,8 +1,8 @@ { "name": "replicate", "version": "1.0.0", - "description": "MCP to interact with the Replicate API", "private": true, + "description": "MCP to interact with the Replicate API", "type": "module", "scripts": { "dev": "deco dev --vite", diff --git a/replicate/tsconfig.json b/replicate/tsconfig.json index c5b23929..392b6275 100644 --- a/replicate/tsconfig.json +++ b/replicate/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/replicate/wrangler.toml b/replicate/wrangler.toml index b0e3ad21..f423a8ea 100644 --- a/replicate/wrangler.toml +++ b/replicate/wrangler.toml @@ -2,7 +2,7 @@ name = "replicate" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] diff --git a/rootly/README.md b/rootly/README.md index 9ddf74aa..2db7a13d 100644 --- a/rootly/README.md +++ b/rootly/README.md @@ -1,4 +1,5 @@ + # Rootly MCP Server [![PyPI version](https://badge.fury.io/py/rootly-mcp-server.svg)](https://pypi.org/project/rootly-mcp-server/) @@ -194,13 +195,7 @@ For full functionality of tools like `get_oncall_handoff_summary`, `get_oncall_s 
"mcpServers": { "rootly": { "command": "uv", - "args": [ - "tool", - "run", - "--from", - "rootly-mcp-server", - "rootly-mcp-server" - ], + "args": ["tool", "run", "--from", "rootly-mcp-server", "rootly-mcp-server"], "env": { "ROOTLY_API_TOKEN": "" } @@ -251,11 +246,7 @@ docker run -p 8000:8000 \ "mcpServers": { "rootly": { "command": "uvx", - "args": [ - "--from", - "rootly-mcp-server", - "rootly-mcp-server" - ], + "args": ["--from", "rootly-mcp-server", "rootly-mcp-server"], "env": { "ROOTLY_API_TOKEN": "" } @@ -433,6 +424,7 @@ Want to get started quickly? We provide pre-built Claude Code skills that showca ### 🚨 [Rootly Incident Responder](examples/skills/rootly-incident-responder.md) An AI-powered incident response specialist that: + - Analyzes production incidents with full context - Finds similar historical incidents using ML-based similarity matching - Suggests solutions based on past successful resolutions @@ -442,6 +434,7 @@ An AI-powered incident response specialist that: - Provides confidence scores and time estimates **Quick Start:** + ```bash # Copy the skill to your project mkdir -p .claude/skills @@ -503,14 +496,13 @@ get_shift_incidents( Returns: `incidents` list + `summary` (counts, avg resolution time, grouping) - ## Contributing See [CONTRIBUTING.md](CONTRIBUTING.md) for developer setup and guidelines. 
## Play with it on Postman -[Run In Postman](https://god.gw.postman.com/run-collection/45004446-1074ba3c-44fe-40e3-a932-af7c071b96eb?action=collection%2Ffork&source=rip_markdown&collection-url=entityId%3D45004446-1074ba3c-44fe-40e3-a932-af7c071b96eb%26entityType%3Dcollection%26workspaceId%3D4bec6e3c-50a0-4746-85f1-00a703c32f24) +[Run In Postman](https://god.gw.postman.com/run-collection/45004446-1074ba3c-44fe-40e3-a932-af7c071b96eb?action=collection%2Ffork&source=rip_markdown&collection-url=entityId%3D45004446-1074ba3c-44fe-40e3-a932-af7c071b96eb%26entityType%3Dcollection%26workspaceId%3D4bec6e3c-50a0-4746-85f1-00a703c32f24) ## About Rootly AI Labs diff --git a/shared/README.md b/shared/README.md index a87c17f5..f1b8c1fe 100644 --- a/shared/README.md +++ b/shared/README.md @@ -32,6 +32,7 @@ Reutilizable middlewares for wrapping async operations: - `applyMiddlewares(options)` - Compose multiple middlewares **Usage:** + ```typescript import { withRetry, @@ -42,15 +43,12 @@ import { const robustOperation = applyMiddlewares({ fn: async () => await apiCall(), - middlewares: [ - withLogging({ title: "My Operation" }), - withRetry(3), - withTimeout(60000), - ], + middlewares: [withLogging({ title: "My Operation" }), withRetry(3), withTimeout(60000)], }); ``` **Re-exported by:** + - `@decocms/mcps-shared/video-generators` - `@decocms/mcps-shared/image-generators` - `@decocms/mcps-shared/image-analyzers` @@ -87,13 +85,12 @@ export const createUploadFileTool = (env: Env) => outputSchema: fileUploadOutputSchema, execute: async ({ input }) => { return await withFileOperationErrorHandling(async () => { - const { file } = await createFileFromInput(input); - + const formData = createFormDataWithFile(file); - + const result = await apiClient.uploadFile(formData); - + return createFileUploadSuccess({ id: result.id, name: result.name, @@ -109,12 +106,14 @@ export const createUploadFileTool = (env: Env) => ##### File Upload **Input:** + - `fileUrl` (string, optional) - URL of the 
file to upload - `fileContent` (string, optional) - Direct file content (text) - `fileName` (string, optional) - Name of the file with extension - `metadata` (record, optional) - Metadata to attach to the file **Output:** + - `success` (boolean) - Whether the operation succeeded - `file` (object, nullable) - File information (id, name, status, created_on, updated_on, metadata) - `message` (string, optional) - Success or error message @@ -122,19 +121,23 @@ export const createUploadFileTool = (env: Env) => ##### File Delete **Input:** + - `fileId` (string, required) - ID of the file to delete **Output:** + - `success` (boolean) - Whether the operation succeeded - `message` (string, optional) - Success or error message ##### File Get **Input:** + - `fileId` (string, required) - ID of the file to retrieve - `includeUrl` (boolean, optional) - Whether to include signed URL **Output:** + - `success` (boolean) - Whether the operation succeeded - `file` (object, nullable) - File information with optional signed_url, percent_done, error_message - `message` (string, optional) - Success or error message @@ -142,11 +145,13 @@ export const createUploadFileTool = (env: Env) => ##### File List **Input:** + - `filter` (string, optional) - Optional filter for files (usually JSON) - `limit` (number, optional) - Maximum number of files to return - `offset` (number, optional) - Number of files to skip **Output:** + - `success` (boolean) - Whether the operation succeeded - `files` (array) - Array of file information objects - `total` (number, optional) - Total count of files @@ -161,7 +166,7 @@ Creates a File object from either a file URL or file content. ```typescript const { file, contentType } = await createFileFromInput({ fileUrl: "https://example.com/file.pdf", - fileName: "document.pdf" + fileName: "document.pdf", }); ``` @@ -172,7 +177,7 @@ Creates a FormData object ready for multipart upload. 
```typescript const formData = createFormDataWithFile(file, "file", { userId: "123", - category: "documents" + category: "documents", }); ``` @@ -181,13 +186,10 @@ const formData = createFormDataWithFile(file, "file", { Wraps operations with standardized error handling. ```typescript -return await withFileOperationErrorHandling( - async () => { - // Your file operation - return result; - }, - "Failed to process file" -); +return await withFileOperationErrorHandling(async () => { + // Your file operation + return result; +}, "Failed to process file"); ``` ##### `createFileUploadSuccess(file, message?)` @@ -195,11 +197,14 @@ return await withFileOperationErrorHandling( Creates a standardized success response for uploads. ```typescript -return createFileUploadSuccess({ - id: "file-123", - name: "document.pdf", - status: "uploaded", -}, "File uploaded successfully"); +return createFileUploadSuccess( + { + id: "file-123", + name: "document.pdf", + status: "uploaded", + }, + "File uploaded successfully", +); ``` ##### `createFileDeleteSuccess(message?)` @@ -243,7 +248,7 @@ Builds URL query parameters for file listing. 
const query = buildFileListQueryParams({ filter: '{"type": "pdf"}', limit: 10, - offset: 20 + offset: 20, }); // Returns: "?filter=%7B%22type%22%3A%22pdf%22%7D&limit=10&offset=20" ``` @@ -274,18 +279,18 @@ import { export const generateImage = (env: Env) => { const executeGeneration = async ( input: GenerateImageInput, - env: Env + env: Env, ): Promise => { // Call your provider's API const response = await callProviderAPI(input.prompt, input.aspectRatio); - + // Save the image const { url } = await saveImageToFileSystem(env, { imageData: response.imageData, mimeType: "image/png", metadata: { prompt: input.prompt }, }); - + return { image: url }; }; @@ -306,11 +311,13 @@ export const generateImage = (env: Env) => { All image generators follow the same contract: **Input:** + - `prompt` (string, required) - Description of the image to be generated - `baseImageUrl` (string, optional) - URL of a base image for image-to-image - `aspectRatio` (enum, optional) - Aspect ratio: "1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9", "21:9" **Output:** + - `image` (string, optional) - URL of the generated image - `error` (boolean, optional) - Whether there was an error in generation - `finishReason` (string, optional) - Reason for completion (success, content filter, etc) @@ -346,6 +353,7 @@ const timedGeneration = withTimeout(executeGeneration, 60000); // 60 seconds Adds contract authorization and settlement for billing, plus **automatically includes retry and logging**. 
Options: + - `clauseId` (required) - Contract clause ID - `contract` (required) - Contract property name in environment - `provider` (optional) - Provider name for logs (default: "Provider") @@ -395,7 +403,7 @@ const robustExecute = withTimeout( provider: "Gemini", maxRetries: 3, }), - 60000 + 60000, ); ``` @@ -418,14 +426,14 @@ const generateImage = (env: Env) => { // Core generation logic const executeGeneration = async ( input: GenerateImageInput, - env: Env + env: Env, ): Promise => { // Call Gemini API const client = createGeminiClient(env); const response = await client.generateImage( input.prompt, input.baseImageUrl || undefined, - input.aspectRatio + input.aspectRatio, ); const candidate = response.candidates[0]; @@ -502,10 +510,10 @@ const generateImage = (env: Env) => { // Only implement DALL-E specific call const response = await fetch("https://api.openai.com/v1/images/generations", { method: "POST", - headers: { "Authorization": `Bearer ${env.OPENAI_API_KEY}` }, + headers: { Authorization: `Bearer ${env.OPENAI_API_KEY}` }, body: JSON.stringify({ prompt: input.prompt }), }); - + const data = await response.json(); return { image: data.data[0].url }; }; diff --git a/shared/audio-transcribers/README.md b/shared/audio-transcribers/README.md index 357e81b0..57e36a8a 100644 --- a/shared/audio-transcribers/README.md +++ b/shared/audio-transcribers/README.md @@ -35,7 +35,7 @@ export const whisperTools = createAudioTranscriberTools({ // Your transcription logic here const client = createWhisperClient(env); const response = await client.transcribeAudio(input.audioUrl); - + return { text: response.text, language: response.language, @@ -84,15 +84,19 @@ export const whisperTools = createAudioTranscriberTools({ ## Features ### Automatic Retry + Failed transcriptions are automatically retried up to 3 times with exponential backoff. ### Timeout Protection + Transcriptions that take longer than 5 minutes are automatically cancelled. 
### Contract-Based Billing + Integrates with Deco's contract system for usage tracking and billing. ### Structured Logging + All operations are logged with timestamps and context for debugging. ## Supported Providers @@ -128,14 +132,15 @@ The module provides standardized error handling: ## Configuration ### Timeouts + - Default: 5 minutes - Configurable in `base.ts`: `MAX_TRANSCRIPTION_TIMEOUT_MS` ### Retries + - Default: 3 attempts - Configurable in `base.ts`: `MAX_TRANSCRIPTION_RETRIES` ## License MIT - diff --git a/shared/image-analyzers/README.md b/shared/image-analyzers/README.md index 18094ddc..aeed9776 100644 --- a/shared/image-analyzers/README.md +++ b/shared/image-analyzers/README.md @@ -21,16 +21,12 @@ export const visionTools = createImageAnalyzerTools({ description: "Analisa imagens usando My Vision API", }, getClient: (env) => createMyVisionClient(env), - + // Tool obrigatória: analyze analyzeTool: { execute: async ({ env, input, client }) => { - const response = await client.analyzeImage( - input.imageUrl, - input.prompt, - input.model - ); - + const response = await client.analyzeImage(input.imageUrl, input.prompt, input.model); + return { analysis: response.text, finishReason: response.finishReason, @@ -38,16 +34,12 @@ export const visionTools = createImageAnalyzerTools({ }; }, }, - + // Tool opcional: compare compareTool: { execute: async ({ env, input, client }) => { - const response = await client.compareImages( - input.imageUrls, - input.prompt, - input.model - ); - + const response = await client.compareImages(input.imageUrls, input.prompt, input.model); + return { comparison: response.text, finishReason: response.finishReason, @@ -55,21 +47,15 @@ export const visionTools = createImageAnalyzerTools({ }; }, }, - + // Tool opcional: extract text (OCR) extractTextTool: { execute: async ({ env, input, client }) => { - const languageHint = input.language - ? ` O texto está em ${input.language}.` - : ""; + const languageHint = input.language ? 
` O texto está em ${input.language}.` : ""; const prompt = `Extraia TODO o texto visível nesta imagem.${languageHint}`; - - const response = await client.analyzeImage( - input.imageUrl, - prompt, - input.model - ); - + + const response = await client.analyzeImage(input.imageUrl, prompt, input.model); + return { text: response.text, finishReason: response.finishReason, @@ -98,40 +84,49 @@ export const tools = [ ## 📦 Ferramentas Disponíveis ### 1. `analyzeImage` (obrigatória) + Analisa uma única imagem com base em um prompt. **Input:** + - `imageUrl`: URL da imagem - `prompt`: Pergunta ou instrução sobre a imagem - `model`: (opcional) Modelo a usar **Output:** + - `analysis`: Texto com a análise - `finishReason`: Motivo do término - `usageMetadata`: Informações de uso de tokens ### 2. `compareImages` (opcional) + Compara múltiplas imagens. **Input:** + - `imageUrls`: Array de URLs (mínimo 2) - `prompt`: Como comparar as imagens - `model`: (opcional) Modelo a usar **Output:** + - `comparison`: Texto com a comparação - `finishReason`: Motivo do término - `usageMetadata`: Informações de uso de tokens ### 3. `extractTextFromImage` (opcional) + Extrai texto de imagens (OCR). **Input:** + - `imageUrl`: URL da imagem - `language`: (opcional) Idioma do texto - `model`: (opcional) Modelo a usar **Output:** + - `text`: Texto extraído - `finishReason`: Motivo do término - `usageMetadata`: Informações de uso de tokens @@ -139,22 +134,26 @@ Extrai texto de imagens (OCR). 
## 🔧 Recursos Incluídos ### Middleware + - **Retry**: Tenta novamente em caso de falha (3 tentativas por padrão) - **Timeout**: Cancela após 60 segundos - **Logging**: Registra início, fim e erros ### Contract Support (Billing) + - **Authorization**: Autoriza cobranças antes da operação - **Settlement**: Finaliza cobrança após sucesso - **Rollback**: Não cobra em caso de falha - **Opcional**: Cada tool pode ter ou não contract ### Schemas Padronizados + Todos os inputs e outputs são validados com Zod, garantindo type-safety. ## 🎨 Exemplos de Providers ### Gemini Vision (com Contract) + ```typescript export const geminiVisionTools = createImageAnalyzerTools({ metadata: { @@ -204,6 +203,7 @@ export const geminiVisionTools = createImageAnalyzerTools({ ``` **wrangler.toml:** + ```toml [[deco.bindings]] type = "contract" @@ -229,47 +229,60 @@ description = "$0.03 per OCR operation" ``` ### GPT-4 Vision + ```typescript export const gpt4VisionTools = createImageAnalyzerTools({ metadata: { provider: "GPT-4 Vision", }, getClient: (env) => createOpenAIClient(env), - analyzeTool: { /* ... */ }, + analyzeTool: { + /* ... */ + }, // GPT-4V suporta múltiplas imagens nativamente - compareTool: { /* ... */ }, - extractTextTool: { /* ... */ }, + compareTool: { + /* ... */ + }, + extractTextTool: { + /* ... */ + }, }); ``` ### Claude Vision + ```typescript export const claudeVisionTools = createImageAnalyzerTools({ metadata: { provider: "Claude Vision", }, getClient: (env) => createAnthropicClient(env), - analyzeTool: { /* ... */ }, + analyzeTool: { + /* ... */ + }, // Claude também suporta múltiplas imagens - compareTool: { /* ... */ }, - extractTextTool: { /* ... */ }, + compareTool: { + /* ... */ + }, + extractTextTool: { + /* ... 
*/ + }, }); ``` ## 🔍 Diferenças vs Video Generators -| Feature | Video Generators | Image Analyzers | -|---------|-----------------|----------------| -| **Operação Principal** | Gerar vídeo | Analisar imagem | -| **Storage** | Obrigatório (salvar vídeo) | Não usado (retorna texto) | -| **Contract** | ✅ Suportado | ✅ Suportado | -| **Timeout Padrão** | 6 minutos | 1 minuto | -| **Tools Opcionais** | list, extend | compare, extractText | -| **Contract por Tool** | Sim (generateTool obrigatório) | Sim (todas opcionais) | +| Feature | Video Generators | Image Analyzers | +| ---------------------- | ------------------------------ | ------------------------- | +| **Operação Principal** | Gerar vídeo | Analisar imagem | +| **Storage** | Obrigatório (salvar vídeo) | Não usado (retorna texto) | +| **Contract** | ✅ Suportado | ✅ Suportado | +| **Timeout Padrão** | 6 minutos | 1 minuto | +| **Tools Opcionais** | list, extend | compare, extractText | +| **Contract por Tool** | Sim (generateTool obrigatório) | Sim (todas opcionais) | ## 📚 Ver Também - [Video Generators](../video-generators/README.md) - [Image Generators](../image-generators/README.md) - [File Management](../tools/file-management/README.md) - diff --git a/shared/image-generators/README.md b/shared/image-generators/README.md index b2b2e839..e707212a 100644 --- a/shared/image-generators/README.md +++ b/shared/image-generators/README.md @@ -78,7 +78,7 @@ const result = await saveImage(storage, { writeExpiresIn: 60, // 1 minute (default) }); -console.log(result.url); // Public URL to access the image +console.log(result.url); // Public URL to access the image console.log(result.path); // Path where it was saved ``` @@ -97,11 +97,7 @@ const { mimeType, imageData } = extractImageData(inlineData); ```typescript import { S3Client } from "@aws-sdk/client-s3"; import { createTool } from "@decocms/runtime/mastra"; -import { - S3StorageAdapter, - saveImage, - extractImageData, -} from "@shared/image-generators"; +import { 
S3StorageAdapter, saveImage, extractImageData } from "@shared/image-generators"; import { z } from "zod"; export const createImageGeneratorTool = (env: Env) => { @@ -124,12 +120,10 @@ export const createImageGeneratorTool = (env: Env) => { execute: async ({ context, input }) => { // 1. Generate image with AI const generatedImage = await yourAIModel.generate(input.prompt); - + // 2. Extract image data - const { mimeType, imageData } = extractImageData( - generatedImage.inlineData - ); - + const { mimeType, imageData } = extractImageData(generatedImage.inlineData); + // 3. Save using injected storage const result = await saveImage(storage, { imageData, @@ -141,7 +135,7 @@ export const createImageGeneratorTool = (env: Env) => { }, directory: "/generated", }); - + return { url: result.url, path: result.path, @@ -315,4 +309,3 @@ This module is part of the MCPs monorepo. To contribute: ## License See LICENSE in the main repository. - diff --git a/shared/package.json b/shared/package.json index d8a02747..b8a48a37 100644 --- a/shared/package.json +++ b/shared/package.json @@ -1,34 +1,34 @@ { - "name": "@decocms/mcps-shared", - "version": "1.0.0", - "private": true, - "type": "module", - "description": "Shared dependencies for Deco CMS MCPs", - "exports": { - "./vite-plugin": "./deco-vite-plugin.ts", - "./tools/user": "./tools/user.ts", - "./tools/file-management": "./tools/file-management/index.ts", - "./tools": "./tools/index.ts", - "./tools/utils/api-client": "./tools/utils/api-client.ts", - "./tools/utils/middleware": "./tools/utils/middleware.ts", - "./image-generators": "./image-generators/index.ts", - "./image-analyzers": "./image-analyzers/index.ts", - "./video-generators": "./video-generators/index.ts", - "./audio-transcribers": "./audio-transcribers/index.ts", - "./storage": "./storage/index.ts", - "./search-ai": "./search-ai/index.ts", - "./serve": "./serve.ts", - "./registry": "./registry.ts", - "./google-oauth": "./google-oauth.ts", - "./whatsapp": 
"./whatsapp/index.ts", - "./api-key-manager": "./api-key-manager.ts", - "./mesh-chat": "./mesh-chat/index.ts" - }, - "devDependencies": { - "@decocms/bindings": "1.0.7", - "@decocms/runtime": "1.1.3", - "@types/bun": "^1.2.14", - "vite": "7.2.0", - "zod": "^4.0.0" - } + "name": "@decocms/mcps-shared", + "version": "1.0.0", + "private": true, + "description": "Shared dependencies for Deco CMS MCPs", + "type": "module", + "exports": { + "./vite-plugin": "./deco-vite-plugin.ts", + "./tools/user": "./tools/user.ts", + "./tools/file-management": "./tools/file-management/index.ts", + "./tools": "./tools/index.ts", + "./tools/utils/api-client": "./tools/utils/api-client.ts", + "./tools/utils/middleware": "./tools/utils/middleware.ts", + "./image-generators": "./image-generators/index.ts", + "./image-analyzers": "./image-analyzers/index.ts", + "./video-generators": "./video-generators/index.ts", + "./audio-transcribers": "./audio-transcribers/index.ts", + "./storage": "./storage/index.ts", + "./search-ai": "./search-ai/index.ts", + "./serve": "./serve.ts", + "./registry": "./registry.ts", + "./google-oauth": "./google-oauth.ts", + "./whatsapp": "./whatsapp/index.ts", + "./api-key-manager": "./api-key-manager.ts", + "./mesh-chat": "./mesh-chat/index.ts" + }, + "devDependencies": { + "@decocms/bindings": "1.0.7", + "@decocms/runtime": "1.1.3", + "@types/bun": "^1.2.14", + "vite": "7.2.0", + "zod": "^4.0.0" + } } diff --git a/shared/search-ai/README.md b/shared/search-ai/README.md index 1eb33dd4..64e1a8b5 100644 --- a/shared/search-ai/README.md +++ b/shared/search-ai/README.md @@ -112,10 +112,14 @@ export const mySearchTools = createSearchAITools({ provider: "My Search AI", }, getClient: (env) => new MySearchAIClient({ apiKey: env.state.API_KEY }), - maxRetries: 5, // Default: 3 - timeoutMs: 120_000, // Default: 60000 (1 minute) - askTool: { /* ... */ }, - chatTool: { /* ... 
*/ }, + maxRetries: 5, // Default: 3 + timeoutMs: 120_000, // Default: 60000 (1 minute) + askTool: { + /* ... */ + }, + chatTool: { + /* ... */ + }, }); ``` @@ -272,6 +276,7 @@ if (result.error) { ``` **Success Output:** + - `error?: false` - Explicit discriminator (optional, defaults to undefined) - `answer: string` - The AI-generated answer - `model?: string` - Model used @@ -279,6 +284,7 @@ if (result.error) { - Plus optional `sources`, `related_questions`, `images` **Error Output:** + - `error: true` - Explicit discriminator (required) - `message?: string` - Error message - `finish_reason?: string` - Reason for failure @@ -294,4 +300,3 @@ if (result.error) { ## License MIT - diff --git a/shared/storage/README.md b/shared/storage/README.md index 1285adbb..7927da71 100644 --- a/shared/storage/README.md +++ b/shared/storage/README.md @@ -5,6 +5,7 @@ Unified storage module for all MCPs. Provides a consistent interface for working ## 🚀 Quick Start **Don't want to read everything? Start here:** + - 📖 **[Quick Start - 3 steps to save images](./QUICKSTART.md)** - 📖 **[All available providers (Supabase, R2, S3...)](./PROVIDERS.md)** - 📖 **[Complete Supabase guide](./SUPABASE_GUIDE.md)** @@ -192,16 +193,16 @@ export const createUploadTool = (env: Env) => execute: async ({ context }) => { // Auto-detect storage (FILE_SYSTEM or S3) const storage = createStorageFromEnv(env); - + // Generate upload URL const writeUrl = await storage.getWriteUrl(context.path, { expiresIn: 60, }); - + // Upload const buffer = Buffer.from(context.content, "base64"); await fetch(writeUrl, { method: "PUT", body: buffer }); - + // Return read URL const readUrl = await storage.getReadUrl(context.path, 3600); return { url: readUrl }; @@ -257,11 +258,7 @@ console.log(metadata.contentLength); // 12345 await storage.deleteObject("/image.png"); // Delete multiple (batch) -const result = await storage.deleteObjects([ - "/image1.png", - "/image2.png", - "/image3.png", -]); +const result = await 
storage.deleteObjects(["/image1.png", "/image2.png", "/image3.png"]); console.log(result.deleted); // ["image1.png", "image2.png", "image3.png"] console.log(result.errors); // [] ``` @@ -345,6 +342,7 @@ const storage = new S3StorageAdapter(config); ## 📦 Dependencies Optional (only if using S3): + ```bash npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner ``` @@ -357,4 +355,3 @@ When adding new adapters: 2. Add tests 3. Document in README 4. Add factory helper if appropriate - diff --git a/shared/tools/utils/README.md b/shared/tools/utils/README.md index d76d57f2..ade6d85b 100644 --- a/shared/tools/utils/README.md +++ b/shared/tools/utils/README.md @@ -17,6 +17,7 @@ Funções auxiliares para fazer requisições a APIs externas: - `downloadWithAuth(url, authHeaders, apiName)` - Download com autenticação **Exemplos:** + ```typescript import { makeApiRequest } from "@decocms/mcps-shared/tools/utils/api-client"; @@ -25,10 +26,10 @@ const data = await makeApiRequest( "https://api.example.com/endpoint", { method: "POST", - headers: { "Authorization": `Bearer ${token}` }, + headers: { Authorization: `Bearer ${token}` }, body: JSON.stringify({ prompt: "test" }), }, - "Example API" + "Example API", ); // Text response @@ -36,7 +37,7 @@ const text = await makeApiRequest( "https://api.example.com/text-endpoint", { method: "GET" }, "Example API", - "text" + "text", ); // Blob response @@ -44,7 +45,7 @@ const blob = await makeApiRequest( "https://api.example.com/file", { method: "GET" }, "Example API", - "blob" + "blob", ); ``` @@ -59,11 +60,13 @@ Middlewares reutilizáveis para wrapping de funções assíncronas com retry, lo Adiciona retry automático com exponential backoff. **Características:** + - Não faz retry de erros de validação (ZodError) - Não faz retry de erros 4xx (400, 401, 403, 404) - Backoff exponencial: 2s, 4s, 8s... 
**Exemplo:** + ```typescript import { withRetry } from "@decocms/mcps-shared/tools/utils/middleware"; @@ -79,10 +82,12 @@ const result = await resilientOperation(); Adiciona logging de performance e erros. **Opções:** + - `title` - Título para os logs - `startMessage` - Mensagem customizada de início (opcional) **Exemplo:** + ```typescript import { withLogging } from "@decocms/mcps-shared/tools/utils/middleware"; @@ -103,6 +108,7 @@ const loggedOperation = withLogging({ Adiciona timeout para prevenir operações muito longas. **Exemplo:** + ```typescript import { withTimeout } from "@decocms/mcps-shared/tools/utils/middleware"; @@ -118,6 +124,7 @@ const timedOperation = withTimeout(30000)(async () => { Compõe múltiplos middlewares em sequência. **Exemplo:** + ```typescript import { applyMiddlewares, @@ -128,11 +135,7 @@ import { const robustOperation = applyMiddlewares({ fn: async () => await apiCall(), - middlewares: [ - withLogging({ title: "API Call" }), - withRetry(3), - withTimeout(60000), - ], + middlewares: [withLogging({ title: "API Call" }), withRetry(3), withTimeout(60000)], }); const result = await robustOperation(); @@ -151,10 +154,8 @@ interface ContractClause { } interface Contract { - CONTRACT_AUTHORIZE: (input: { - clauses: ContractClause[]; - }) => Promise<{ transactionId: string }>; - + CONTRACT_AUTHORIZE: (input: { clauses: ContractClause[] }) => Promise<{ transactionId: string }>; + CONTRACT_SETTLE: (input: { transactionId: string; clauses: ContractClause[]; @@ -249,11 +250,7 @@ const operation = withRetry(3)(async () => { ```typescript const operation = applyMiddlewares({ fn: async () => await apiCall(), - middlewares: [ - withLogging({ title: "My Operation" }), - withRetry(3), - withTimeout(30000), - ], + middlewares: [withLogging({ title: "My Operation" }), withRetry(3), withTimeout(30000)], }); ``` @@ -267,13 +264,13 @@ const doExecute = async () => { try { const result = await operation(); - + await contract.CONTRACT_SETTLE({ transactionId, 
clauses: [{ clauseId: "operation:run", amount: 1 }], vendorId: env.DECO_CHAT_WORKSPACE, }); - + return result; } catch (error) { // Handle error @@ -283,11 +280,7 @@ const doExecute = async () => { const withMiddlewares = applyMiddlewares({ fn: doExecute, - middlewares: [ - withLogging({ title: "Paid Operation" }), - withRetry(3), - withTimeout(60000), - ], + middlewares: [withLogging({ title: "Paid Operation" }), withRetry(3), withTimeout(60000)], }); ``` @@ -315,4 +308,3 @@ const result = await flaky(); // "success" after 3 attempts - [Middleware](./middleware.ts) - [Video Generators](../../video-generators/README.md) - [Image Analyzers](../../image-analyzers/README.md) - diff --git a/shared/video-generators/README.md b/shared/video-generators/README.md index 83251a8a..8276d354 100644 --- a/shared/video-generators/README.md +++ b/shared/video-generators/README.md @@ -29,13 +29,13 @@ const tools = createVideoGeneratorTools({ execute: async ({ env, input }) => { // Start video generation (returns operation) const operation = await startVideoGeneration(env, input); - + // Wait for completion const completed = await pollOperation(env, operation.name); - + // Download as stream (memory efficient - no blob in RAM!) const videoStream = await downloadVideoAsStream(env, completed.videoUri); - + return { data: videoStream, // ✅ ReadableStream - streams directly to storage mimeType: "video/mp4", @@ -53,7 +53,7 @@ const tools = createVideoGeneratorTools({ }); // Use the tools -const [generateVideo] = tools.map(tool => tool(env)); +const [generateVideo] = tools.map((tool) => tool(env)); ``` ## Input Schema @@ -109,6 +109,7 @@ Support varies by model. Common options: - **8 seconds** - Full length (default, most models) > **Note:** Each model has specific duration capabilities. For example: +> > - Veo 3.x models: 4, 6, 8 seconds > - Veo 2.x models: 5, 6, 7, 8 seconds @@ -116,15 +117,15 @@ Support varies by model. Common options: Different models support different features. 
Here's a comparison: -| Model | Reference Images | Last Frame | Audio | Durations | -|-------|-----------------|------------|-------|-----------| -| `veo-3.1-generate-preview` | ✅ | ✅ | ✅ | 4, 6, 8s | -| `veo-3.1-fast-generate-preview` | ❌ | ✅ | ✅ | 4, 6, 8s | -| `veo-3.0-generate-001` | ❌ | ❌ | ✅ | 4, 6, 8s | -| `veo-3.0-fast-generate-001` | ❌ | ❌ | ✅ | 4, 6, 8s | -| `veo-3.0-generate-exp` | ❌ | ✅ | ✅ | 4, 6, 8s | -| `veo-2.0-generate-001` | ❌ | ✅ | ❌ | 5, 6, 7, 8s | -| `veo-2.0-generate-exp` | ✅ | ✅ | ❌ | 5, 6, 7, 8s | +| Model | Reference Images | Last Frame | Audio | Durations | +| ------------------------------- | ---------------- | ---------- | ----- | ----------- | +| `veo-3.1-generate-preview` | ✅ | ✅ | ✅ | 4, 6, 8s | +| `veo-3.1-fast-generate-preview` | ❌ | ✅ | ✅ | 4, 6, 8s | +| `veo-3.0-generate-001` | ❌ | ❌ | ✅ | 4, 6, 8s | +| `veo-3.0-fast-generate-001` | ❌ | ❌ | ✅ | 4, 6, 8s | +| `veo-3.0-generate-exp` | ❌ | ✅ | ✅ | 4, 6, 8s | +| `veo-2.0-generate-001` | ❌ | ✅ | ❌ | 5, 6, 7, 8s | +| `veo-2.0-generate-exp` | ✅ | ✅ | ❌ | 5, 6, 7, 8s | > **Note:** When using reference images with Veo, duration is automatically set to 8 seconds. 
@@ -163,16 +164,16 @@ The `saveVideo` function saves the generated video to the configured storage: ```typescript await saveVideo(storage, { - videoData: blob, // Blob or ArrayBuffer + videoData: blob, // Blob or ArrayBuffer mimeType: "video/mp4", metadata: { prompt: "...", operationName: "...", }, - directory: "/videos", // Target directory - readExpiresIn: 3600, // Read URL expiration time - writeExpiresIn: 300, // Write URL expiration time - fileName: "custom-name", // File name (optional) + directory: "/videos", // Target directory + readExpiresIn: 3600, // Read URL expiration time + writeExpiresIn: 300, // Write URL expiration time + fileName: "custom-name", // File name (optional) }); ``` @@ -191,7 +192,7 @@ getContract: (env) => ({ clauseId: "video_generation", amount: 100, // Cost in credits }, -}) +}); ``` ## Long-Running Operations @@ -205,30 +206,30 @@ Video generation often takes several minutes. The framework handles this with po ```typescript execute: async ({ env, input }) => { const client = createVeoClient(env); - + // Start operation (returns immediately with operation name) const operation = await client.generateVideo(input.prompt, "veo-3.1-generate-preview", { aspectRatio: input.aspectRatio, durationSeconds: input.duration, }); - + // Wait for completion (with automatic polling) const completed = await client.pollOperationUntilComplete( operation.name, 360000, // 6 minutes max - 10000, // poll every 10 seconds + 10000, // poll every 10 seconds ); - + // Stream video (no blob in memory!) 
const video = completed.response.generateVideoResponse.generatedSamples[0]; const videoStream = await client.downloadVideo(video.video.uri); - + return { data: videoStream, // ✅ Streams directly to storage mimeType: video.video.mimeType || "video/mp4", operationName: operation.name, }; -} +}; ``` ## Timeouts @@ -258,30 +259,24 @@ const tools = createVideoGeneratorTools({ // Veo only supports 16:9 and 9:16, default to 16:9 for other ratios const veoAspectRatio = - input.aspectRatio === "16:9" || input.aspectRatio === "9:16" - ? input.aspectRatio - : "16:9"; + input.aspectRatio === "16:9" || input.aspectRatio === "9:16" ? input.aspectRatio : "16:9"; // Start video generation - const operation = await client.generateVideo( - input.prompt, - "veo-3.1-generate-preview", - { - aspectRatio: veoAspectRatio, - durationSeconds: input.duration, - referenceImages: input.referenceImages, - firstFrameImageUrl: input.firstFrameUrl, - lastFrameImageUrl: input.lastFrameUrl, - personGeneration: input.personGeneration, - negativePrompt: input.negativePrompt, - }, - ); + const operation = await client.generateVideo(input.prompt, "veo-3.1-generate-preview", { + aspectRatio: veoAspectRatio, + durationSeconds: input.duration, + referenceImages: input.referenceImages, + firstFrameImageUrl: input.firstFrameUrl, + lastFrameImageUrl: input.lastFrameUrl, + personGeneration: input.personGeneration, + negativePrompt: input.negativePrompt, + }); // Poll until complete (6 minutes max, poll every 10 seconds) const completed = await client.pollOperationUntilComplete( operation.name, 360000, // 6 minutes - 10000, // poll every 10 seconds + 10000, // poll every 10 seconds ); // Check if completed successfully @@ -292,8 +287,7 @@ const tools = createVideoGeneratorTools({ }; } - const generatedSamples = - completed.response.generateVideoResponse.generatedSamples; + const generatedSamples = completed.response.generateVideoResponse.generatedSamples; if (!generatedSamples || generatedSamples.length === 0) 
{ return { error: true, @@ -339,15 +333,16 @@ const tools = createVideoGeneratorTools({ execute: async ({ env, input }) => { // Download video as stream (this is the recommended approach!) const videoStream = await downloadVideoAsStream(url); - + return { - data: videoStream, // ✅ ReadableStream - streams directly to storage + data: videoStream, // ✅ ReadableStream - streams directly to storage mimeType: "video/mp4", }; -} +}; ``` **Why streaming is the default:** + - ✅ **Any size**: Handle 100MB, 500MB, 1GB+ videos without issues - ✅ **Constant memory**: ~5-10MB usage regardless of video size - ✅ **Faster**: No intermediate buffering or loading into RAM @@ -365,4 +360,3 @@ execute: async ({ env, input }) => { - 👤 **Person generation**: Control whether people appear in videos - 🚫 **Negative prompts**: Fine-tune output by specifying what to avoid - 🌊 **Stream-based processing**: Default behavior for handling videos of any size - diff --git a/slack-mcp/CACHE_ARCHITECTURE.md b/slack-mcp/CACHE_ARCHITECTURE.md index 8c830183..6651a40e 100644 --- a/slack-mcp/CACHE_ARCHITECTURE.md +++ b/slack-mcp/CACHE_ARCHITECTURE.md @@ -1,5 +1,5 @@ # Cache Architecture - DATABASE Binding & K8s Multi-Pod Support - + ## 🎯 Problem Statement ### The Challenge @@ -57,19 +57,20 @@ When user saves config in Mesh UI: // server/main.ts - onChange handler onChange: async (env, config) => { const state = config.state; // Has resolved bindings! - + // Step 1: Save to DATABASE (PostgreSQL) // env.MESH_REQUEST_CONTEXT.state.DATABASE is available here await saveConnectionConfig(env, configData); // ↑ Uses DATABASE.DATABASES_RUN_SQL internally - + // Step 2: Cache locally for webhooks await cacheConnectionConfig(configData); // ↑ Saves to ./data/slack-kv.json -} +}; ``` **Key Points:** + - `env` has `MESH_REQUEST_CONTEXT` with resolved bindings - `env.MESH_REQUEST_CONTEXT.state.DATABASE` is an MCP client, not a connection string! 
- We call `DATABASE.DATABASES_RUN_SQL({ sql, params })` to interact with PostgreSQL @@ -82,20 +83,21 @@ When Slack webhook arrives: // server/router.ts app.post("/slack/events/:connectionId", async (c) => { const connectionId = c.req.param("connectionId"); - + // Read from KV cache (no DATABASE binding needed!) const config = await getCachedConnectionConfig(connectionId); - + if (!config) { return c.json({ error: "Config not cached" }, 503); } - + // Process webhook with cached config await processWebhook(config); }); ``` **Key Points:** + - No MCP context → can't access DATABASE binding - Reads from local KV cache instead - Fast (local disk, no network call) @@ -139,6 +141,7 @@ setTimeout(async () => { ``` **What happens:** + 1. Pod starts with empty cache 2. After 2s, calls `SYNC_CONFIG_CACHE` tool (MCP context!) 3. Tool queries DATABASE binding for all configs @@ -155,14 +158,14 @@ export const syncCacheTool = { id: "SYNC_CONFIG_CACHE", async execute({ runtimeContext }) { const env = runtimeContext.env; - + // Step 1: Query DATABASE (only works in MCP context!) const configs = await runSQL( env, // Has MESH_REQUEST_CONTEXT with DATABASE binding "SELECT * FROM slack_connections", - [] + [], ); - + // Step 2: Cache each config locally for (const row of configs) { await cacheConnectionConfig({ @@ -171,13 +174,14 @@ export const syncCacheTool = { // ... 
other fields }); } - + return { success: true, synced: configs.length }; - } + }, }; ``` **Key Points:** + - Runs in MCP context (has DATABASE binding access) - Called automatically on startup (warm-up) - Can be called manually via health check or MCP client @@ -257,14 +261,14 @@ export async function runSQL( ): Promise { // Get DATABASE binding from MCP context const dbBinding = env.MESH_REQUEST_CONTEXT?.state?.DATABASE; - + if (!dbBinding) { throw new Error("DATABASE binding not available"); } - + // Call DATABASES_RUN_SQL tool const response = await dbBinding.DATABASES_RUN_SQL({ sql, params }); - + return response.result[0].results ?? []; } @@ -284,16 +288,14 @@ export async function saveConnectionConfig( #### config-cache.ts - Cache Interface ```typescript -export async function cacheConnectionConfig( - config: ConnectionConfig -): Promise { +export async function cacheConnectionConfig(config: ConnectionConfig): Promise { const kv = getKvStore(); // Persistent KV store const key = `config:${config.connectionId}`; await kv.set(key, config); // Saves to ./data/slack-kv.json } export async function getCachedConnectionConfig( - connectionId: string + connectionId: string, ): Promise { const kv = getKvStore(); const key = `config:${connectionId}`; @@ -316,7 +318,7 @@ GET /health "status": "ok", "uptime": 3600, "metrics": { - "configCacheSize": 5, // ← Number of cached configs + "configCacheSize": 5, // ← Number of cached configs "kvStoreSize": 150, "apiKeysCount": 5 }, @@ -353,14 +355,16 @@ curl -X POST http://localhost:8080/mcp \ { "jsonrpc": "2.0", "result": { - "content": [{ - "type": "text", - "text": { - "success": true, - "synced": 5, - "errors": [] + "content": [ + { + "type": "text", + "text": { + "success": true, + "synced": 5, + "errors": [] + } } - }] + ] } } ``` @@ -387,12 +391,13 @@ MESH_URL=https://mesh.example.com - For faster startup, use PV for `./data/` directory 2. 
**Readiness Probe** + ```yaml readinessProbe: httpGet: path: /health port: 8080 - initialDelaySeconds: 5 # Wait for warm-up + initialDelaySeconds: 5 # Wait for warm-up periodSeconds: 10 ``` @@ -426,11 +431,13 @@ MESH_URL=https://mesh.example.com ### Issue: Webhook returns 503 "Config not cached" **Causes:** + 1. New pod, warm-up not completed yet (wait 2-5 seconds) 2. Warm-up failed (check logs for DATABASE connection errors) 3. Config never saved in Mesh UI **Solutions:** + 1. Check health endpoint: `GET /health` → look at `configCacheSize` 2. Check logs for warm-up status: `[Warmup] ✅ Cache sync result` 3. Manually trigger sync: Call `SYNC_CONFIG_CACHE` tool @@ -446,7 +453,8 @@ MESH_URL=https://mesh.example.com **Cause:** One pod has stale cache -**Solution:** +**Solution:** + 1. Re-save config in Mesh UI → populates all caches via `onChange` 2. Or manually call `SYNC_CONFIG_CACHE` on affected pod @@ -468,6 +476,7 @@ Warm-up sync (all configs) | 100-500ms Expected: **99.9%+** after warm-up completes Cache miss only if: + - Brand new pod (<2s after startup) - Warm-up failed (rare, check DATABASE connectivity) @@ -490,15 +499,14 @@ Cache miss only if: ### Trade-offs -| Approach | Pros | Cons | -|-----------------------|-------------------------------|-----------------------------| -| DATABASE only | Simple, single source | Can't use in webhook routes | -| KV cache only | Fast, always available | Doesn't persist across pods | -| **Our solution** | Best of both worlds | Slightly more complex | +| Approach | Pros | Cons | +| ---------------- | ---------------------- | --------------------------- | +| DATABASE only | Simple, single source | Can't use in webhook routes | +| KV cache only | Fast, always available | Doesn't persist across pods | +| **Our solution** | Best of both worlds | Slightly more complex | --- **Last Updated:** 2026-01-28 **Author:** Slack MCP Team **Version:** 1.0 - diff --git a/slack-mcp/ENV_VARS.md b/slack-mcp/ENV_VARS.md index fb34b999..5e01bc61 
100644 --- a/slack-mcp/ENV_VARS.md +++ b/slack-mcp/ENV_VARS.md @@ -11,6 +11,7 @@ SUPABASE_ANON_KEY= ``` **Como obter:** + 1. Supabase Dashboard → Settings → API 2. Copie "Project URL" → `SUPABASE_URL` 3. Copie "anon public" key → `SUPABASE_ANON_KEY` @@ -105,11 +106,11 @@ spec: template: spec: containers: - - name: slack-mcp - image: slack-mcp:latest - envFrom: - - secretRef: - name: slack-mcp-env + - name: slack-mcp + image: slack-mcp:latest + envFrom: + - secretRef: + name: slack-mcp-env ``` --- @@ -143,4 +144,3 @@ SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9... ``` Tudo mais é opcional! - diff --git a/slack-mcp/LLM_BINDINGS.md b/slack-mcp/LLM_BINDINGS.md index 24fb51df..0aa70a9a 100644 --- a/slack-mcp/LLM_BINDINGS.md +++ b/slack-mcp/LLM_BINDINGS.md @@ -33,14 +33,15 @@ Arquivos (imagens e áudio) são baixados do Slack e convertidos para base64: const downloaded = await downloadSlackFile(file.url_private, file.mimetype); return { - type: isAudio ? "audio" : "image", // Detecta automaticamente - data: downloaded.data, // Base64 - mimeType: downloaded.mimeType, // audio/mp4, image/png, etc. - name: file.name, // Nome original do arquivo + type: isAudio ? "audio" : "image", // Detecta automaticamente + data: downloaded.data, // Base64 + mimeType: downloaded.mimeType, // audio/mp4, image/png, etc. 
+ name: file.name, // Nome original do arquivo }; ``` **Tipos de arquivo suportados:** + - **Imagens**: `image/png`, `image/jpeg`, `image/gif`, `image/webp` - **Áudio**: `audio/mp4`, `audio/mpeg`, `audio/ogg`, `audio/webm`, `audio/wav` - **Documentos**: `application/pdf`, `application/vnd.openxmlformats-officedocument.wordprocessingml.document` (DOCX) @@ -55,7 +56,7 @@ Para **PDF** e **DOCX**, o texto é extraído automaticamente usando bibliotecas import * as pdfParse from "pdf-parse"; const data = await pdfParse(buffer); -return data.text; // Texto extraído do PDF +return data.text; // Texto extraído do PDF ``` ```typescript @@ -63,10 +64,11 @@ return data.text; // Texto extraído do PDF import mammoth from "mammoth"; const result = await mammoth.extractRawText({ buffer }); -return result.value; // Texto extraído do DOCX +return result.value; // Texto extraído do DOCX ``` **Limitações:** + - ✅ **Texto puro** extraído com sucesso - ✅ Máximo **500KB** de texto após extração - ❌ **Imagens** dentro do PDF/DOCX não são processadas @@ -74,6 +76,7 @@ return result.value; // Texto extraído do DOCX - ⚠️ Arquivos muito grandes (>1.5MB) podem ser truncados **Logs esperados:** + ```bash [Slack] 📄 Extracting text from PDF: documento.pdf [Slack] 📄 Downloaded text file: documento.pdf (application/pdf, 12345 chars) @@ -97,6 +100,7 @@ return text; ``` **Formatos suportados:** + - `text/plain` → `.txt`, `.log`, `.env` - `application/json` → `.json` - `text/csv` → `.csv` @@ -104,20 +108,22 @@ return text; - Código-fonte: `.js`, `.ts`, `.tsx`, `.jsx`, `.py`, `.rb`, `.go`, `.java`, `.c`, `.cpp`, `.rs`, `.sh`, etc. 
**Como são enviados ao LLM:** + ```typescript // lib/llm.ts - messagesToPrompt() if (msg.textFiles && msg.textFiles.length > 0) { for (const file of msg.textFiles) { - parts.push({ - type: "text", - text: `\n\n[File: ${file.name}]\n\`\`\`${file.language}\n${file.content}\n\`\`\`` + parts.push({ + type: "text", + text: `\n\n[File: ${file.name}]\n\`\`\`${file.language}\n${file.content}\n\`\`\``, }); } } ``` **Exemplo de prompt gerado:** -``` + +```` Analise este arquivo JSON [File: config.json] @@ -126,8 +132,9 @@ Analise este arquivo JSON "version": "1.0.0", "settings": {...} } -``` -``` +```` + +```` ### 3. Construção do Prompt @@ -141,7 +148,7 @@ const prompt = [ role: "user", parts: [ { type: "text", text: "O que tem nesta imagem?" }, - { + { type: "file", url: "data:image/png;base64,iVBORw0KGgo...", filename: "image", @@ -150,7 +157,7 @@ const prompt = [ ] } ]; -``` +```` ## Formato de Mensagens @@ -160,10 +167,10 @@ Cada mensagem segue o formato `UIMessage` do Vercel AI SDK: ```typescript interface UIMessage { - id: string; // ID único obrigatório + id: string; // ID único obrigatório role: "user" | "assistant" | "system"; - parts: UIMessagePart[]; // Array de partes (texto, arquivos, etc.) - metadata?: unknown; // Metadados opcionais + parts: UIMessagePart[]; // Array de partes (texto, arquivos, etc.) + metadata?: unknown; // Metadados opcionais } ``` @@ -189,7 +196,8 @@ interface UIMessage { } ``` -**IMPORTANTE:** +**IMPORTANTE:** + - O `type` deve ser `"file"` (não `"image"` ou `"data-url"`) - O `url` deve ser um data URI completo com o prefixo `data:${mimeType};base64,` - Inclua `filename` e `mediaType` obrigatoriamente @@ -211,20 +219,23 @@ O bot usa a binding do **OpenAI Whisper** para transcrever áudios e enviar **ap 7. 
✅ LLM recebe APENAS o texto (não recebe áudio) **Formatos suportados pelo Whisper:** + - `audio/flac`, `audio/m4a`, `audio/mp3`, `audio/mp4` - `audio/mpeg`, `audio/mpga`, `audio/oga`, `audio/ogg` - `audio/wav`, `audio/webm` **Configuração:** + ```typescript // No Mesh Admin, adicionar binding: -WHISPER: "@deco/whisper" +WHISPER: "@deco/whisper"; // Opcional: Definir URL pública do servidor -SERVER_PUBLIC_URL: "https://localhost-xxx.deco.host" +SERVER_PUBLIC_URL: "https://localhost-xxx.deco.host"; ``` **Vantagens:** + - ✅ Transcrição rápida e precisa (57+ idiomas) - ✅ Funciona mesmo que Slack não gere transcrição - ✅ LLM pode processar o texto normalmente @@ -232,6 +243,7 @@ SERVER_PUBLIC_URL: "https://localhost-xxx.deco.host" - ✅ Áudio é automaticamente limpo após 10 minutos **Logs esperados:** + ```bash [EventHandler] Audio file for transcription: { name: "audio_message.m4a", @@ -244,11 +256,12 @@ SERVER_PUBLIC_URL: "https://localhost-xxx.deco.host" **Sem Whisper:** Se a binding não estiver configurada e o usuário enviar áudio, o bot responderá com: + ``` -🎤 Áudio detectado! Para processar arquivos de áudio, é necessário +🎤 Áudio detectado! Para processar arquivos de áudio, é necessário ativar a integração Whisper no Mesh. -Entre em contato com o administrador para configurar o Whisper +Entre em contato com o administrador para configurar o Whisper e habilitar transcrição automática de áudios. ``` @@ -308,9 +321,7 @@ A API Decopilot retorna eventos Server-Sent Events (SSE): ```typescript // lib/llm.ts - generateLLMResponseWithStreaming() -const reader = response.body - .pipeThrough(new TextDecoderStream()) - .getReader(); +const reader = response.body.pipeThrough(new TextDecoderStream()).getReader(); let buffer = ""; let textContent = ""; @@ -318,14 +329,14 @@ let textContent = ""; while (!finished) { const { done, value } = await reader.read(); if (done) break; - + buffer += value; const lines = buffer.split("\n"); buffer = lines.pop() ?? 
""; - + for (const line of lines) { const parsed = parseStreamLine(line); - + if (parsed?.type === "text-delta") { textContent += parsed.delta; await onStream(textContent, false); @@ -359,13 +370,7 @@ O contexto mantém as últimas mensagens da conversa: ```typescript // slack/utils/contextBuilder.ts -const messages = await buildLLMMessages( - channel, - text, - ts, - threadTs, - images -); +const messages = await buildLLMMessages(channel, text, ts, threadTs, images); ``` ### Formato do Contexto @@ -375,23 +380,23 @@ const messages = await buildLLMMessages( { id: "msg_xxx", role: "user", - parts: [{ type: "text", text: "..." }] + parts: [{ type: "text", text: "..." }], }, // ... mensagens anteriores { id: "msg_xxx", role: "user", - parts: [{ type: "text", text: "" }] + parts: [{ type: "text", text: "" }], }, { id: "msg_xxx", role: "user", parts: [ { type: "text", text: "Pergunta atual" }, - { type: "file", url: "data:image/...", filename: "image", mediaType: "image/png" } - ] - } -] + { type: "file", url: "data:image/...", filename: "image", mediaType: "image/png" }, + ], + }, +]; ``` ## Configuração @@ -413,10 +418,7 @@ const messages = await buildLLMMessages( ```typescript // main.ts -configureLLM( - config.modelId || DEFAULT_LANGUAGE_MODEL, - config.systemPrompt -); +configureLLM(config.modelId || DEFAULT_LANGUAGE_MODEL, config.systemPrompt); ``` ## Troubleshooting @@ -453,8 +455,9 @@ configureLLM( **Causa:** Header `Accept` faltando. **Solução:** Adicione o header: + ```typescript -Accept: "application/json, text/event-stream" +Accept: "application/json, text/event-stream"; ``` ### Problema: Erro de validação "expected string, received undefined" no campo "id" @@ -462,6 +465,7 @@ Accept: "application/json, text/event-stream" **Causa:** Mensagens sem campo `id`. 
**Solução:** Gere IDs únicos para cada mensagem: + ```typescript const generateMessageId = () => { const timestamp = Date.now(); @@ -480,50 +484,45 @@ const generateMessageId = () => { ```typescript // Enviar mensagem com imagem para LLM -const response = await fetch( - `${meshUrl}/api/${organizationId}/decopilot/stream`, - { - method: "POST", - headers: { - "Content-Type": "application/json", - "Authorization": `Bearer ${token}`, - "Accept": "application/json, text/event-stream" - }, - body: JSON.stringify({ - messages: [ - { - id: "msg_1234567890_abc123", - role: "user", - parts: [ - { - type: "text", - text: "Analise esta imagem" - }, - { - type: "file", - url: "data:image/png;base64,iVBORw0KGgo...", - filename: "screenshot.png", - mediaType: "image/png" - } - ] - } - ], - model: { - id: "anthropic/claude-sonnet-4.5", - connectionId: "conn_xxx" - }, - agent: { - id: "vir_xxx" +const response = await fetch(`${meshUrl}/api/${organizationId}/decopilot/stream`, { + method: "POST", + headers: { + "Content-Type": "application/json", + Authorization: `Bearer ${token}`, + Accept: "application/json, text/event-stream", + }, + body: JSON.stringify({ + messages: [ + { + id: "msg_1234567890_abc123", + role: "user", + parts: [ + { + type: "text", + text: "Analise esta imagem", + }, + { + type: "file", + url: "data:image/png;base64,iVBORw0KGgo...", + filename: "screenshot.png", + mediaType: "image/png", + }, + ], }, - stream: true - }) - } -); + ], + model: { + id: "anthropic/claude-sonnet-4.5", + connectionId: "conn_xxx", + }, + agent: { + id: "vir_xxx", + }, + stream: true, + }), +}); // Processar stream -const reader = response.body - .pipeThrough(new TextDecoderStream()) - .getReader(); +const reader = response.body.pipeThrough(new TextDecoderStream()).getReader(); let textContent = ""; let buffer = ""; @@ -531,15 +530,15 @@ let buffer = ""; while (true) { const { done, value } = await reader.read(); if (done) break; - + buffer += value; const lines = buffer.split("\n"); 
buffer = lines.pop() ?? ""; - + for (const line of lines) { if (line.startsWith("data: ")) { const data = JSON.parse(line.slice(6)); - + if (data.type === "text-delta") { textContent += data.delta; console.log(textContent); @@ -563,9 +562,6 @@ console.log("[LLM] Calling Decopilot API:", { hasAgent: !!agentId, stream: true, messageCount: messages.length, - hasImages: messages.some(m => - m.parts.some(p => p.type === "file") - ) + hasImages: messages.some((m) => m.parts.some((p) => p.type === "file")), }); ``` - diff --git a/slack-mcp/OPTIMIZATION_CHANGES.md b/slack-mcp/OPTIMIZATION_CHANGES.md index 3ed52c45..a92666e6 100644 --- a/slack-mcp/OPTIMIZATION_CHANGES.md +++ b/slack-mcp/OPTIMIZATION_CHANGES.md @@ -11,9 +11,11 @@ Este documento descreve as otimizações implementadas no Slack MCP para melhora ## ✅ 1. Remoção do DATABASE Binding ### Problema + O binding `DATABASE` estava declarado no `StateSchema` mas nunca era utilizado pelo código (0 referências encontradas). ### Solução + Removido o binding não utilizado do schema de configuração. **Arquivo modificado:** `server/types/env.ts` @@ -34,6 +36,7 @@ export const StateSchema = z.object({ ``` ### Impacto + - ✅ Schema mais limpo e fácil de entender - ✅ Menos confusão para desenvolvedores - ✅ Interface de configuração do Mesh simplificada @@ -43,25 +46,29 @@ export const StateSchema = z.object({ ## ✅ 2. Correção de Memory Leak no API Key Manager ### Problema + O `Map` de API keys persistentes crescia indefinidamente sem limpeza, causando: + - Memory leak em servidores long-running - API keys perdidas após restart (não sobreviviam a reinícios) ### Solução + Implementadas duas novas funções: #### 2.1. `loadApiKeyFromKV` + Carrega API keys do KV store após restart do servidor. ```typescript export async function loadApiKeyFromKV( connectionId: string, - getConfigFn: (id: string) => Promise<{ meshToken?: string } | null> + getConfigFn: (id: string) => Promise<{ meshToken?: string } | null>, ): Promise { // 1. 
Verifica cache em memória const cached = persistentApiKeys.get(connectionId); if (cached) return cached; - + // 2. Tenta carregar do KV (sobrevive a restarts) const config = await getConfigFn(connectionId); if (config?.meshToken) { @@ -69,17 +76,18 @@ export async function loadApiKeyFromKV( console.log(`[API-KEY] Loaded from KV for ${connectionId}`); return config.meshToken; } - + return null; } ``` #### 2.2. `cleanupOrphanedKeys` + Remove API keys de conexões que não existem mais (cleanup periódico). ```typescript export async function cleanupOrphanedKeys( - getConfigFn: (id: string) => Promise + getConfigFn: (id: string) => Promise, ): Promise { let cleaned = 0; for (const [connectionId] of persistentApiKeys) { @@ -97,6 +105,7 @@ export async function cleanupOrphanedKeys( ``` #### 2.3. Integração no `main.ts` + ```typescript // Tenta carregar API key do KV primeiro (survive restarts) let apiKey = await loadApiKeyFromKV(connectionId, readConnectionConfig); @@ -104,22 +113,30 @@ let apiKey = await loadApiKeyFromKV(connectionId, readConnectionConfig); // Se não encontrada, cria uma nova if (!apiKey) { apiKey = await getOrCreatePersistentApiKey({ - meshUrl, organizationId, connectionId, temporaryToken + meshUrl, + organizationId, + connectionId, + temporaryToken, }); } // Cleanup periódico a cada 1 hora -setInterval(async () => { - console.log("[API-KEY] Running periodic cleanup..."); - await cleanupOrphanedKeys(readConnectionConfig); -}, 60 * 60 * 1000); +setInterval( + async () => { + console.log("[API-KEY] Running periodic cleanup..."); + await cleanupOrphanedKeys(readConnectionConfig); + }, + 60 * 60 * 1000, +); ``` **Arquivos modificados:** + - `shared/api-key-manager.ts` - Novas funções - `slack-mcp/server/main.ts` - Integração e cleanup ### Impacto + - ✅ API keys sobrevivem a restarts (recovery automático) - ✅ Memory leak corrigido (cleanup periódico) - ✅ Mais confiável em produção @@ -129,6 +146,7 @@ setInterval(async () => { ## ✅ 3. 
Otimização do KV Store Cleanup ### Problema + - Cleanup rodava a cada 5 minutos (muitos I/O) - Sem limite máximo de entradas (risco de crescimento infinito) - Logs sem métricas úteis @@ -136,6 +154,7 @@ setInterval(async () => { ### Solução #### 3.1. Intervalo Aumentado + ```typescript // ANTES const CLEANUP_INTERVAL = 5 * 60 * 1000; // 5 minutos @@ -145,12 +164,13 @@ const CLEANUP_INTERVAL = 15 * 60 * 1000; // 15 minutos (-66% I/O) ``` #### 3.2. Limite Máximo com Alertas + ```typescript const MAX_ENTRIES = 10_000; // Limite recomendado async cleanup(): Promise { // ... limpeza de expirados ... - + // Verifica limite const sizeAfter = this.store.size; if (sizeAfter > MAX_ENTRIES) { @@ -159,19 +179,20 @@ async cleanup(): Promise { `Consider adjusting TTLs or archiving old data.` ); } - + // Log com métricas if (cleaned > 0) { console.log( `[KV] 🧹 Cleanup: ${cleaned} expired entries removed (${sizeBefore} → ${sizeAfter})` ); } - + return cleaned; } ``` #### 3.3. Função para Monitoramento + ```typescript export function getKvStoreSize(): number { return kvStore?.getSize() ?? 0; @@ -181,6 +202,7 @@ export function getKvStoreSize(): number { **Arquivo modificado:** `server/lib/kv.ts` ### Impacto + - ✅ -66% de operações de I/O (15min vs 5min) - ✅ Alertas quando limite é excedido - ✅ Métricas detalhadas no cleanup @@ -191,17 +213,20 @@ export function getKvStoreSize(): number { ## ✅ 4. Simplificação do EVENT_BUS ### Problema + O `EVENT_BUS` binding estava sendo usado **incorretamente** como fallback quando o LLM não estava configurado, enviando eventos que nunca eram consumidos. ### Solução #### 4.1. Mantido no Schema (Opcional) + ```typescript // MANTIDO para usos legítimos (reações, canais, etc.) -EVENT_BUS: BindingOf("@deco/event-bus").optional() +EVENT_BUS: BindingOf("@deco/event-bus").optional(); ``` #### 4.2. 
Removido Uso Incorreto + Substituídas 3 chamadas de `publishToEventBus()` (fallback) por mensagens amigáveis: ```typescript @@ -221,30 +246,34 @@ if (!isLLMConfigured()) { const warningMsg = "⚠️ Por favor, configure um LLM (Language Model) no Mesh para usar o bot.\n\n" + "Acesse as configurações da conexão no Mesh e selecione um provedor de modelo (como OpenAI, Anthropic, etc.)."; - + await replyInThread(channel, threadTs, warningMsg); - + // Deleta mensagem de "pensando..." se existir if (thinkingMsg?.ts) { await deleteMessage(channel, thinkingMsg.ts); } - + console.log("[EventHandler] LLM not configured - sent configuration warning"); return; } ``` #### 4.3. Removida Função `publishToEventBus` + A função de fallback foi completamente removida do código. **Arquivos modificados:** + - `server/slack/handlers/eventHandler.ts` - 3 substituições + função removida **Mantido:** + - `server/events.ts` - Ainda usado para eventos legítimos (reações, canais criados, etc.) - `EVENT_BUS` binding - Opcional no schema para usos futuros ### Impacto + - ✅ UX melhorado: mensagens claras quando LLM não configurado - ✅ Menos eventos inúteis no Event Bus - ✅ Código mais limpo e fácil de entender @@ -255,15 +284,19 @@ A função de fallback foi completamente removida do código. ## ✅ 5. Health Check Endpoint ### Problema + Sem endpoint de monitoramento para Kubernetes/produção, dificultando: + - Liveness probes - Readiness probes - Debugging de problemas em produção ### Solução + Criado endpoint `/health` com métricas do sistema. #### 5.1. Nova Função de Health Status + **Arquivo:** `server/health.ts` ```typescript @@ -293,6 +326,7 @@ export async function getHealthStatus(): Promise { ``` #### 5.2. Rota Atualizada + **Arquivo:** `server/router.ts` ```typescript @@ -310,6 +344,7 @@ app.get("/health", async (c) => { ``` #### 5.3. 
Exemplo de Resposta + ```json { "status": "ok", @@ -332,10 +367,12 @@ app.get("/health", async (c) => { ``` **Arquivos criados/modificados:** + - `server/health.ts` - Novo arquivo - `server/router.ts` - Rota atualizada ### Impacto + - ✅ Kubernetes liveness/readiness probes - ✅ Monitoramento de memória e recursos - ✅ Visibilidade do estado do sistema @@ -345,21 +382,22 @@ app.get("/health", async (c) => { ## 📊 Resumo do Impacto -| Métrica | Antes | Depois | Melhoria | -|---------|-------|--------|----------| -| **Bindings no Schema** | 6 | 5 | -16% (mais limpo) | -| **API Key recovery** | ❌ Perde no restart | ✅ Recarrega do KV | Confiável | -| **Memory leak risk** | ⚠️ Alto (Map infinito) | ✅ Baixo (cleanup) | Estável | -| **KV cleanup I/O** | A cada 5min | A cada 15min | -66% I/O | -| **EVENT_BUS fallback** | ❌ Eventos inúteis | ✅ Mensagens claras | UX melhor | -| **Health monitoring** | ❌ Nenhum | ✅ Endpoint completo | Observabilidade | -| **Alertas** | ❌ Nenhum | ✅ KV size limit | Proativo | +| Métrica | Antes | Depois | Melhoria | +| ---------------------- | ---------------------- | -------------------- | ----------------- | +| **Bindings no Schema** | 6 | 5 | -16% (mais limpo) | +| **API Key recovery** | ❌ Perde no restart | ✅ Recarrega do KV | Confiável | +| **Memory leak risk** | ⚠️ Alto (Map infinito) | ✅ Baixo (cleanup) | Estável | +| **KV cleanup I/O** | A cada 5min | A cada 15min | -66% I/O | +| **EVENT_BUS fallback** | ❌ Eventos inúteis | ✅ Mensagens claras | UX melhor | +| **Health monitoring** | ❌ Nenhum | ✅ Endpoint completo | Observabilidade | +| **Alertas** | ❌ Nenhum | ✅ KV size limit | Proativo | --- ## 🔄 Como Testar ### 1. API Key Recovery + ```bash # Terminal 1: Inicie o servidor bun run dev @@ -375,6 +413,7 @@ bun run dev ``` ### 2. KV Cleanup + ```bash # Verifique logs a cada 15 minutos [KV] 🧹 Cleanup: 2 expired entries removed (150 → 148) @@ -384,6 +423,7 @@ bun run dev ``` ### 3. 
EVENT_BUS (LLM não configurado) + ``` Usuário: @bot olá Bot: ⚠️ Por favor, configure um LLM (Language Model) no Mesh para usar o bot. @@ -392,6 +432,7 @@ Acesse as configurações da conexão no Mesh e selecione um provedor de modelo ``` ### 4. Health Check + ```bash curl http://localhost:3003/health @@ -407,6 +448,7 @@ curl http://localhost:3003/health ``` ### 5. API Key Cleanup + ```bash # Aguarde 1 hora ou force: # (Deletar config de uma conexão e esperar cleanup) @@ -421,6 +463,7 @@ curl http://localhost:3003/health ## 🚀 Deploy em Produção ### Kubernetes Probes + ```yaml apiVersion: apps/v1 kind: Deployment @@ -430,23 +473,24 @@ spec: template: spec: containers: - - name: slack-mcp - image: slack-mcp:latest - livenessProbe: - httpGet: - path: /health - port: 3003 - initialDelaySeconds: 10 - periodSeconds: 30 - readinessProbe: - httpGet: - path: /health - port: 3003 - initialDelaySeconds: 5 - periodSeconds: 10 + - name: slack-mcp + image: slack-mcp:latest + livenessProbe: + httpGet: + path: /health + port: 3003 + initialDelaySeconds: 10 + periodSeconds: 30 + readinessProbe: + httpGet: + path: /health + port: 3003 + initialDelaySeconds: 5 + periodSeconds: 10 ``` ### Monitoramento Recomendado + 1. **Health endpoint**: Monitor status != "ok" 2. **KV size**: Alert quando > 9000 (90% do limite) 3. **API Keys**: Alert se count > 100 (possível leak) @@ -458,11 +502,13 @@ spec: ## 📝 Próximos Passos (Opcional) ### Curto Prazo + - [ ] Adicionar rate limiting no health endpoint - [ ] Expor métricas Prometheus (`/metrics`) - [ ] Adicionar traces OpenTelemetry ### Longo Prazo + - [ ] Migrar KV Store para PostgreSQL (multi-pod K8s) - [ ] Implementar Redis cache (30s TTL) para reads - [ ] Adicionar criptografia de dados no KV store @@ -481,4 +527,3 @@ Todas as mudanças do plano de otimização foram implementadas com sucesso: 5. ✅ Health Check endpoint criado O Slack MCP agora está mais **confiável**, **eficiente** e **observável** em produção! 
🎉 - diff --git a/slack-mcp/PERSISTENT_KV.md b/slack-mcp/PERSISTENT_KV.md index 1f92587e..e4b69351 100644 --- a/slack-mcp/PERSISTENT_KV.md +++ b/slack-mcp/PERSISTENT_KV.md @@ -12,6 +12,7 @@ O servidor MCP do Slack estava perdendo as configurações após reiniciar, caus ### Causa Raiz O KV store original usava **memória volátil** (`Map`), perdendo todos os dados quando: + - ✅ Servidor reinicia (Ctrl+C) - ✅ Hot reload (`--hot` flag) - ✅ Crash ou erro no código @@ -35,6 +36,7 @@ Substituímos o KV store em memória por um **KV store persistente** que salva o #### 1. KV Store com Persistência (`server/lib/kv.ts`) **Antes:** + ```typescript class KVStore { private store = new Map>(); @@ -43,6 +45,7 @@ class KVStore { ``` **Depois:** + ```typescript class KVStore { private store = new Map>(); @@ -167,6 +170,7 @@ const data = await kv.get("key"); ### Backup O arquivo `./data/slack-kv.json` pode ser: + - ✅ Copiado para backup - ✅ Versionado (sem dados sensíveis) - ✅ Migrado entre ambientes @@ -174,11 +178,13 @@ O arquivo `./data/slack-kv.json` pode ser: ## 🔒 Segurança ⚠️ **IMPORTANTE**: O arquivo `./data/slack-kv.json` contém: + - 🔐 Tokens do Slack (bot tokens) - 🔐 Signing secrets - 🔐 Tokens de API do Mesh **Proteções:** + - ✅ `.gitignore` configurado para ignorar `data/` - ✅ Permissões de arquivo devem ser restritas em produção - ⚠️ Considere criptografar em produção (futuro) @@ -186,6 +192,7 @@ O arquivo `./data/slack-kv.json` pode ser: ## 📊 Logs ### Inicialização + ``` [KV] 🚀 Initializing persistent KV store: ./data/slack-kv.json [KV] 📂 Loaded 3 entries from disk @@ -193,12 +200,14 @@ O arquivo `./data/slack-kv.json` pode ser: ``` ### Operações + ``` [KV] 💾 Saved 3 entries to disk [KV] 🧹 Cleaned up 2 expired entries ``` ### Shutdown + ``` [KV] 💾 Flushing to disk before shutdown... 
``` @@ -218,11 +227,11 @@ O arquivo `./data/slack-kv.json` pode ser: // shared/storage/adapters/redis.ts export class RedisKVAdapter implements KVStore { constructor(private client: Redis) {} - + async get(key: string): Promise { return await this.client.get(key); } - + async set(key: string, value: T, ttlMs?: number): Promise { await this.client.set(key, value, { ex: ttlMs ? ttlMs / 1000 : undefined }); } @@ -232,6 +241,7 @@ export class RedisKVAdapter implements KVStore { ## ✅ Resultado Agora o Slack MCP sobrevive a: + - ✅ Reinícios (Ctrl+C + restart) - ✅ Hot reloads (`--hot`) - ✅ Crashes @@ -239,5 +249,3 @@ Agora o Slack MCP sobrevive a: - ✅ Server restarts **E o melhor:** as configurações são **persistidas automaticamente** sem intervenção manual! 🎉 - - diff --git a/slack-mcp/POSTGRESQL_MIGRATION.md b/slack-mcp/POSTGRESQL_MIGRATION.md index 65dc0e31..9dc5c325 100644 --- a/slack-mcp/POSTGRESQL_MIGRATION.md +++ b/slack-mcp/POSTGRESQL_MIGRATION.md @@ -54,6 +54,7 @@ Migrar do KV Store local (arquivo JSON) para PostgreSQL para suportar **deployme ### 1. Migration SQL (`migrations/001-slack-connections.ts`) Criada tabela `slack_connections` com: + - **Primary Key**: `connection_id` - **Índices**: `team_id`, `organization_id`, `updated_at` - **Campos**: Todos os dados de configuração (tokens, model IDs, etc.) @@ -80,6 +81,7 @@ CREATE TABLE slack_connections ( ### 2. 
Database Factory (`server/database/index.ts`) Adaptado do Mesh, suporta: + - **SQLite** (desenvolvimento local) - `sqlite://./data/slack.db` - **PostgreSQL** (produção K8s) - `postgresql://...` @@ -88,11 +90,11 @@ Auto-detecção pelo `DATABASE_URL`: ```typescript export function createDatabase(databaseUrl?: string): SlackDatabase { const config = parseDatabaseUrl(databaseUrl); - + if (config.type === "postgres") { return createPostgresDatabase(config); } - + return createSqliteDatabase(config); } ``` @@ -121,6 +123,7 @@ setCache(connectionId, config); ``` **Performance:** + - Cache hit: **~0.1ms** (memória) - Cache miss: **~2-10ms** (PostgreSQL) - 99% dos requests são cache hits @@ -195,38 +198,38 @@ kind: Deployment metadata: name: slack-mcp spec: - replicas: 3 # ✅ Múltiplos pods funcionam! + replicas: 3 # ✅ Múltiplos pods funcionam! template: spec: containers: - - name: slack-mcp - image: slack-mcp:latest - env: - - name: DATABASE_URL - valueFrom: - secretKeyRef: - name: slack-mcp-secrets - key: database-url - livenessProbe: - httpGet: - path: /health - port: 3003 - readinessProbe: - httpGet: - path: /health - port: 3003 + - name: slack-mcp + image: slack-mcp:latest + env: + - name: DATABASE_URL + valueFrom: + secretKeyRef: + name: slack-mcp-secrets + key: database-url + livenessProbe: + httpGet: + path: /health + port: 3003 + readinessProbe: + httpGet: + path: /health + port: 3003 ``` ## 📊 Performance -| Métrica | KV Local | PostgreSQL + Cache | Diferença | -|---------|----------|-------------------|-----------| -| **Read (cache hit)** | 0.1ms | 0.1ms | Igual | -| **Read (cache miss)** | 0.1ms | 2-10ms | +1-9ms | -| **Write** | 1-5ms (debounce) | 5-20ms | +4-15ms | -| **Multi-pod K8s** | ❌ QUEBRA | ✅ Funciona | Crítico | -| **Restart recovery** | ✅ Sim (disk) | ✅ Sim (DB) | Igual | -| **Cache hit rate** | N/A | 99% | Excelente | +| Métrica | KV Local | PostgreSQL + Cache | Diferença | +| --------------------- | ---------------- | ------------------ | --------- | 
+| **Read (cache hit)** | 0.1ms | 0.1ms | Igual | +| **Read (cache miss)** | 0.1ms | 2-10ms | +1-9ms | +| **Write** | 1-5ms (debounce) | 5-20ms | +4-15ms | +| **Multi-pod K8s** | ❌ QUEBRA | ✅ Funciona | Crítico | +| **Restart recovery** | ✅ Sim (disk) | ✅ Sim (DB) | Igual | +| **Cache hit rate** | N/A | 99% | Excelente | ## 🔧 Troubleshooting @@ -318,6 +321,7 @@ PORT=3003 DATABASE_URL=postgresql://localhost:5432/slack bun run dev ## 📚 Arquivos Modificados/Criados ### Novos Arquivos + - `migrations/001-slack-connections.ts` - Migration SQL - `migrations/index.ts` - Migration registry - `server/database/index.ts` - Database factory @@ -325,6 +329,7 @@ PORT=3003 DATABASE_URL=postgresql://localhost:5432/slack bun run dev - `POSTGRESQL_MIGRATION.md` - Este documento ### Modificados + - `server/lib/data.ts` - PostgreSQL adapter com cache - `server/types/env.ts` - DATABASE obrigatório - `server/health.ts` - Health check para PostgreSQL @@ -332,6 +337,7 @@ PORT=3003 DATABASE_URL=postgresql://localhost:5432/slack bun run dev - `package.json` - Dependências (kysely, pg) ### Mantidos (Sem Mudanças) + - `server/lib/kv.ts` - Ainda usado para threads temporárias - `server/router.ts` - Rotas inalteradas - `server/slack/handlers/` - Handlers inalterados @@ -349,8 +355,8 @@ Agora o Slack MCP funciona em **Kubernetes multi-pod**! 
🚀 --- **Próximos Passos Opcionais:** + - [ ] Redis/Valkey para cache distribuído (eliminar TTL de 30s) - [ ] Connection pooling otimizado para PostgreSQL - [ ] Backup automático do banco de dados - [ ] Métricas Prometheus para cache hit rate - diff --git a/slack-mcp/QUICK_START.md b/slack-mcp/QUICK_START.md index 9aaba0f5..6689bdac 100644 --- a/slack-mcp/QUICK_START.md +++ b/slack-mcp/QUICK_START.md @@ -64,6 +64,7 @@ bun run dev ``` **Deve ver:** + ``` [Supabase] ✅ Client initialized successfully [Storage] Using Supabase for config persistence (multi-pod ready) @@ -93,4 +94,3 @@ Supabase Dashboard → Table Editor → slack_connections - `SUPABASE_SETUP_SIMPLE.md` - Setup detalhado - `ENV_VARS.md` - Todas as variáveis de ambiente - diff --git a/slack-mcp/README.md b/slack-mcp/README.md index dbfb0a16..1d949a21 100644 --- a/slack-mcp/README.md +++ b/slack-mcp/README.md @@ -1,5 +1,5 @@ -# Slack MCP - +# Slack MCP + MCP para integração com Slack, incluindo bot inteligente com gerenciamento de threads, comandos de AI agent e suporte a webhooks. ## Features @@ -64,6 +64,7 @@ https://mesh.deco.cx/webhooks/{connectionId} ``` Esta URL: + - ✅ Responde automaticamente ao challenge de verificação do Slack - ✅ Verifica assinaturas usando seu `SIGNING_SECRET` - ✅ Publica eventos no Event Bus para processamento pelo MCP @@ -71,18 +72,19 @@ Esta URL: ## Campos de Configuração -| Campo | Obrigatório | Descrição | -|-------|-------------|-----------| -| BOT_TOKEN | ✅ | Token do bot (xoxb-...) 
| -| SIGNING_SECRET | ✅ | Signing Secret para verificar webhooks | -| LOG_CHANNEL_ID | ❌ | Canal para logs do bot | -| THREAD_TIMEOUT_MIN | ❌ | Timeout de inatividade em minutos (padrão: 10) | -| ALLOWED_CHANNELS | ❌ | IDs de canais permitidos (separados por vírgula) | -| ENABLE_STREAMING | ❌ | Habilitar respostas em streaming (padrão: true) | +| Campo | Obrigatório | Descrição | +| ------------------ | ----------- | ------------------------------------------------ | +| BOT_TOKEN | ✅ | Token do bot (xoxb-...) | +| SIGNING_SECRET | ✅ | Signing Secret para verificar webhooks | +| LOG_CHANNEL_ID | ❌ | Canal para logs do bot | +| THREAD_TIMEOUT_MIN | ❌ | Timeout de inatividade em minutos (padrão: 10) | +| ALLOWED_CHANNELS | ❌ | IDs de canais permitidos (separados por vírgula) | +| ENABLE_STREAMING | ❌ | Habilitar respostas em streaming (padrão: true) | ## Tools Disponíveis ### Mensagens + - `SLACK_SEND_MESSAGE` - Enviar mensagem para canal ou thread - `SLACK_REPLY_IN_THREAD` - Responder em uma thread - `SLACK_EDIT_MESSAGE` - Editar mensagem existente @@ -92,12 +94,14 @@ Esta URL: - `SLACK_SEARCH_MESSAGES` - Buscar mensagens ### Canais + - `SLACK_LIST_CHANNELS` - Listar canais do workspace - `SLACK_GET_CHANNEL_INFO` - Informações de um canal - `SLACK_JOIN_CHANNEL` - Entrar em um canal - `SLACK_GET_CHANNEL_MEMBERS` - Listar membros de um canal ### Usuários e Reactions + - `SLACK_GET_USER_INFO` - Informações de um usuário - `SLACK_LIST_USERS` - Listar usuários do workspace - `SLACK_GET_BOT_INFO` - Informações do bot @@ -105,6 +109,7 @@ Esta URL: - `SLACK_REMOVE_REACTION` - Remover reaction ### Setup e Debug + - `SLACK_GET_BOT_STATUS` - Status do bot - `SLACK_GET_THREAD_INFO` - Info de thread lógica - `SLACK_RESET_THREAD` - Resetar contexto de thread @@ -115,11 +120,13 @@ Esta URL: O MCP resolve o problema comum de mistura de contextos em bots do Slack: ### Problema Anterior + - Cada canal era tratado como uma thread lógica - Múltiplas @mentions no mesmo canal misturavam 
contextos - Era necessário criar novos canais para "resetar" o bot ### Solução Implementada + - Cada `@mention` ao bot cria uma **nova thread lógica** - O identificador da thread usa `message.ts` (timestamp da mensagem), não o `channel_id` - Respostas na mesma thread do Slack mantêm o contexto compartilhado @@ -159,6 +166,7 @@ Este MCP usa o **Mesh Universal Webhook Proxy** para receber eventos: ``` **Benefícios:** + - Cada organização tem sua própria URL - O MCP não precisa expor endpoints HTTP - Verificação de assinatura centralizada no Mesh @@ -179,4 +187,3 @@ O deploy é automático via GitHub Actions quando há push para `main`. ```bash bun run scripts/deploy.ts slack-mcp ``` - diff --git a/slack-mcp/SUPABASE_SETUP_SIMPLE.md b/slack-mcp/SUPABASE_SETUP_SIMPLE.md index 4f47b304..44be9b59 100644 --- a/slack-mcp/SUPABASE_SETUP_SIMPLE.md +++ b/slack-mcp/SUPABASE_SETUP_SIMPLE.md @@ -53,11 +53,13 @@ CREATE INDEX idx_slack_connections_updated_at ON slack_connections(updated_at); 2. Copie os dois valores: **Project URL:** + ``` https://xxxxxxxxxxxxx.supabase.co ``` **anon/public key:** + ``` eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6InN... ``` @@ -88,6 +90,7 @@ bun run dev ``` **Deve ver:** + ``` [Supabase] ✅ Client initialized successfully [Storage] Using Supabase for config persistence (multi-pod ready) @@ -99,6 +102,7 @@ bun run dev ## ✅ Pronto! Agora o Slack MCP vai salvar todas as configurações no Supabase automaticamente. Funciona em: + - ✅ Local com `deco link` - ✅ Deploy produção - ✅ Múltiplos pods no Kubernetes @@ -116,6 +120,7 @@ Agora o Slack MCP vai salvar todas as configurações no Supabase automaticament ## 💰 Custo **$0/mês** - Supabase Free Tier: + - 500MB storage - 500MB bandwidth/mês - Unlimited API requests @@ -126,13 +131,16 @@ Agora o Slack MCP vai salvar todas as configurações no Supabase automaticament ## 🆘 Problemas? 
### Erro: "Failed to initialize client" + - Verifique se SUPABASE_URL e SUPABASE_ANON_KEY estão corretos - Teste no browser: abra SUPABASE_URL (deve abrir página do Supabase) ### Erro: "relation slack_connections does not exist" + - Execute o SQL do passo 2 novamente no SQL Editor ### Configs não aparecem + - Verifique logs: `[Supabase] 💾 Saved connection config` - Abra Table Editor e veja se tem dados @@ -143,4 +151,3 @@ Agora o Slack MCP vai salvar todas as configurações no Supabase automaticament - Dashboard: https://supabase.com/dashboard - Docs: https://supabase.com/docs - Status: https://status.supabase.com - diff --git a/slack-mcp/app.json b/slack-mcp/app.json index e326f2cb..5e2f7318 100644 --- a/slack-mcp/app.json +++ b/slack-mcp/app.json @@ -12,7 +12,17 @@ "metadata": { "categories": ["Communication", "Automation"], "official": false, - "tags": ["slack", "bot", "messaging", "ai-agent", "webhooks", "automation", "threads", "channels", "workspace"], + "tags": [ + "slack", + "bot", + "messaging", + "ai-agent", + "webhooks", + "automation", + "threads", + "channels", + "workspace" + ], "short_description": "Slack bot with intelligent thread management, AI agent commands, and webhook support.", "mesh_description": "The Slack MCP provides comprehensive integration with Slack workspaces, enabling AI agents to interact with channels, messages, and users. This MCP includes intelligent thread management that solves the common problem of context mixing - each @mention to the bot creates a new logical conversation thread, while replies in the same Slack thread maintain shared context. AI agents can send, edit, and delete messages; reply in threads; search message history; list and join channels; get user information; and add/remove reactions. The bot supports both channel messages (via @mentions) and direct messages, with configurable channel permissions. 
Built-in webhook handling processes Slack events (messages, reactions, channel events) and publishes them to the Event Bus for integration with other MCPs. The MCP uses timeout-based thread reset (default 10 minutes) to automatically clear conversation context after periods of inactivity. Ideal for building intelligent Slack assistants, automating workspace workflows, or integrating AI capabilities into team communication." } diff --git a/slack-mcp/package.json b/slack-mcp/package.json index e0f4b03c..e619d271 100644 --- a/slack-mcp/package.json +++ b/slack-mcp/package.json @@ -1,9 +1,9 @@ { "name": "slack-mcp", "version": "1.0.0", - "type": "module", "private": true, "description": "Slack bot with intelligent thread management, AI agent commands, and webhook support", + "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", "reset-db": "bun run scripts/reset-db.ts", @@ -31,8 +31,8 @@ }, "devDependencies": { "@decocms/mcps-shared": "1.0.0", - "@types/pg": "^8.16.0", "@types/ioredis": "^5.0.0", + "@types/pg": "^8.16.0", "deco-cli": "^0.28.0", "typescript": "^5.7.2" }, diff --git a/slack-mcp/tsconfig.json b/slack-mcp/tsconfig.json index cf5c6b05..72f9b90c 100644 --- a/slack-mcp/tsconfig.json +++ b/slack-mcp/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023" - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ @@ -24,16 +22,10 @@ /* Path Aliases */ "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ] + "server/*": ["./server/*"] }, /* Types */ - "types": [ - "@types/node" - ] + "types": ["@types/node"] }, - "include": [ - "server" - ] + "include": ["server"] } diff --git a/sora/package.json b/sora/package.json index 3ce2a9e9..fb980164 100644 --- a/sora/package.json +++ b/sora/package.json @@ -1,8 +1,8 @@ { "name": "sora", "version": "1.0.0", - "description": "Sora 2 MCP", "private": true, + "description": "Sora 2 MCP", "type": "module", 
"scripts": { "dev": "deco dev --vite", diff --git a/sora/tsconfig.json b/sora/tsconfig.json index c5b23929..392b6275 100644 --- a/sora/tsconfig.json +++ b/sora/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/sora/wrangler.toml b/sora/wrangler.toml index f34503e9..a9ba7577 100644 --- a/sora/wrangler.toml +++ b/sora/wrangler.toml @@ -2,7 +2,7 @@ name = "sora" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] diff --git a/stack-overflow/README.md b/stack-overflow/README.md index b67f7c24..aedf9680 100644 --- a/stack-overflow/README.md +++ b/stack-overflow/README.md @@ -14,9 +14,9 @@ Stack Overflow MCP is a standardized "front door" for AI agents to discover and It enables your AI to: -* **🔍 Search:** Search relevant technical questions and answers. -* **📖 Retrieve:** Retrieve full conversation threads, including accepted solutions and comments. -* **🧠 Ground:** Ground answers in community-verified data to reduce hallucinations. +- **🔍 Search:** Search relevant technical questions and answers. +- **📖 Retrieve:** Retrieve full conversation threads, including accepted solutions and comments. +- **🧠 Ground:** Ground answers in community-verified data to reduce hallucinations. All through natural language — just describe your technical problem and let it do the heavy lifting for you. @@ -24,10 +24,10 @@ All through natural language — just describe your technical problem and let it ## ⚡ Key Features -* **Trusted Knowledge** – Access the vast library of developer solutions directly within your AI workflow. -* **Real-Time Data** – Fetch the latest discussions and answers without relying on outdated data. 
-* **Standardized Access** – No bespoke API integration; use a unified protocol compatible with standard MCP clients.
-* **Secure Authentication** – Simple OAuth flow connects securely to your existing Stack Overflow account.
+- **Trusted Knowledge** – Access the vast library of developer solutions directly within your AI workflow.
+- **Real-Time Data** – Fetch the latest discussions and answers without relying on outdated data.
+- **Standardized Access** – No bespoke API integration; use a unified protocol compatible with standard MCP clients.
+- **Secure Authentication** – Simple OAuth flow connects securely to your existing Stack Overflow account.

---

@@ -35,8 +35,8 @@ All through natural language — just describe your technical problem and let it

### Prerequisites

-* An MCP-compatible client (e.g., Claude Desktop, Cursor, Visual Studio Code, etc).
-* A Stack Overflow account ([free to join](https://stackoverflow.com/users/signup)).
+- An MCP-compatible client (e.g., Claude Desktop, Cursor, Visual Studio Code, etc.).
+- A Stack Overflow account ([free to join](https://stackoverflow.com/users/signup)).

Read the **[official documentation](https://api.stackexchange.com/docs/mcp-server)** to set up the Stack Overflow MCP Server in your client.

@@ -48,5 +48,5 @@ Read the **[official documentation](https://api.stackexchange.com/docs/mcp-serve

This is a beta server, and your input is critical to its evolution.
-* **Support:** Use our [contact form](https://stackoverflow.com/contact) -* **Feedback:** Share your thoughts in our [feedback form](https://forms.gle/oTLpxPQ6CGfS4Axp7) +- **Support:** Use our [contact form](https://stackoverflow.com/contact) +- **Feedback:** Share your thoughts in our [feedback form](https://forms.gle/oTLpxPQ6CGfS4Axp7) diff --git a/stack-overflow/app.json b/stack-overflow/app.json index 3fafcf07..ac57b5df 100644 --- a/stack-overflow/app.json +++ b/stack-overflow/app.json @@ -13,7 +13,14 @@ "metadata": { "categories": ["Development", "Data"], "official": true, - "tags": ["stack-overflow", "technical-questions", "answers", "knowledge-base", "programming", "solutions"], + "tags": [ + "stack-overflow", + "technical-questions", + "answers", + "knowledge-base", + "programming", + "solutions" + ], "short_description": "Access Stack Overflow's trusted and verified technical questions and answers", "mesh_description": "Provides access to Stack Overflow's trusted and verified technical questions and answers. This MCP enables users to retrieve and utilize the vast knowledge base of Stack Overflow directly within their AI workflows. Search for programming solutions, best practices, and technical guidance from the world's largest developer community. Perfect for enhancing AI-powered development tools with accurate, community-vetted programming knowledge." } diff --git a/stilla-ai/app.json b/stilla-ai/app.json index 6fa9b4fb..98587a39 100644 --- a/stilla-ai/app.json +++ b/stilla-ai/app.json @@ -17,4 +17,3 @@ "mesh_description": "Stilla AI MCP provides intelligent AI-powered assistance for various productivity and automation tasks. This MCP server offers access to advanced AI capabilities that help streamline workflows, automate repetitive tasks, and enhance productivity. Whether you need help with data processing, content generation, task management, or workflow automation, Stilla AI provides the tools and capabilities to get things done efficiently." 
} } - diff --git a/strapi/package.json b/strapi/package.json index e899cb26..74614548 100644 --- a/strapi/package.json +++ b/strapi/package.json @@ -1,8 +1,8 @@ { "name": "strapi", "version": "1.0.0", - "description": "Strapi CMS MCP Server - Content management and API integration", "private": true, + "description": "Strapi CMS MCP Server - Content management and API integration", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/strapi/server/.deco/preferences.json b/strapi/server/.deco/preferences.json index f6560c94..649e3be3 100644 --- a/strapi/server/.deco/preferences.json +++ b/strapi/server/.deco/preferences.json @@ -1,4 +1,4 @@ { "mcp-install-version-7e84c9e8": "3a1f730440ee2aad49233e96149e1ff2add793ad", "mcp-install-version-af6f7047": "3a1f730440ee2aad49233e96149e1ff2add793ad" -} \ No newline at end of file +} diff --git a/stripe-official/README.md b/stripe-official/README.md index a7de393a..53f489ba 100644 --- a/stripe-official/README.md +++ b/stripe-official/README.md @@ -25,4 +25,3 @@ https://dashboard.stripe.com/apikeys - **GitHub**: https://github.com/stripe/agent-toolkit - **Website**: https://stripe.com - **Documentation**: https://stripe.com/docs/api - diff --git a/stripe-official/app.json b/stripe-official/app.json index 5a429b24..62bbe01c 100644 --- a/stripe-official/app.json +++ b/stripe-official/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["E-commerce", "Payments", "Finance"], "official": true, - "tags": ["stripe", "payments", "ecommerce", "subscriptions", "billing", "invoices", "customers", "finance"], + "tags": [ + "stripe", + "payments", + "ecommerce", + "subscriptions", + "billing", + "invoices", + "customers", + "finance" + ], "short_description": "Official Stripe MCP - Manage payments, subscriptions, and billing operations through Stripe's API", "mesh_description": "The Stripe MCP provides comprehensive integration with Stripe's payment processing platform, enabling AI agents to manage complete 
payment and billing workflows. This official MCP allows you to create and manage customers with payment methods and billing details, process one-time and recurring payments securely, manage products, prices, and subscription plans, generate and send invoices automatically, handle refunds and disputes, track payment status and webhooks, and access detailed financial reporting and analytics. Key features include full customer lifecycle management, flexible subscription billing with usage-based pricing, automated invoice generation and payment collection, support for multiple payment methods (cards, bank transfers, wallets), handling of multi-currency transactions, fraud detection and prevention tools, and comprehensive webhook integration for real-time events. Perfect for building e-commerce platforms, SaaS billing systems, marketplace payment flows, or AI agents that can manage the complete payment lifecycle from customer onboarding to revenue reporting. The MCP supports secure API key authentication and provides access to all Stripe features through natural language commands." 
} } - diff --git a/supabase-official/README.md b/supabase-official/README.md index a0288ebb..4efe08ab 100644 --- a/supabase-official/README.md +++ b/supabase-official/README.md @@ -26,4 +26,3 @@ https://supabase.com/dashboard/account/tokens - **Website**: https://supabase.com - **Documentation**: https://supabase.com/docs - **MCP Page**: https://supabase.com/mcp - diff --git a/supabase-official/app.json b/supabase-official/app.json index 8a3f3642..d952cc3f 100644 --- a/supabase-official/app.json +++ b/supabase-official/app.json @@ -13,9 +13,17 @@ "metadata": { "categories": ["Database", "Development", "Backend"], "official": true, - "tags": ["supabase", "postgres", "database", "sql", "edge-functions", "backend", "api", "authentication"], + "tags": [ + "supabase", + "postgres", + "database", + "sql", + "edge-functions", + "backend", + "api", + "authentication" + ], "short_description": "Official Supabase MCP - Manage Postgres databases, Edge Functions, and platform features", "mesh_description": "The Supabase MCP provides comprehensive integration with the Supabase platform, enabling full database and backend management. This official MCP allows you to execute SQL queries and manage Postgres databases, apply migrations with version control, deploy and manage Edge Functions (Deno-based serverless), create and manage development branches for safe testing, generate TypeScript types from database schemas, monitor database performance with advisor recommendations, access and analyze application logs, manage database extensions and features, and configure authentication and storage services. Key features include full SQL execution capabilities, migration management with apply/list operations, Edge Function deployment and retrieval, branch creation for isolated development environments, advisor system for security and performance checks, log access across all Supabase services, and TypeScript type generation for type-safe development. 
Perfect for building full-stack applications, managing database schemas, deploying serverless functions, and automating Supabase workflows. The MCP supports OAuth authentication and provides natural language access to all major Supabase platform features." } } - diff --git a/teamwork/README.md b/teamwork/README.md index 46be089a..40c54195 100644 --- a/teamwork/README.md +++ b/teamwork/README.md @@ -45,6 +45,7 @@ Production-ready HTTP server for cloud deployments and multi-client support. **📖 [Full HTTP Server Documentation](https://github.com/nicholasgriffintn/teamwork-mcpserver/blob/main/cmd/mcp-http/README.md)** Quick start: + ```bash TW_MCP_SERVER_ADDRESS=:8080 go run cmd/mcp-http/main.go ``` @@ -56,6 +57,7 @@ Direct STDIO interface for desktop applications and development environments. **📖 [Full STDIO Server Documentation](cmd/mcp-stdio/README.md)** Quick start: + ```bash TW_MCP_BEARER_TOKEN=your-token go run cmd/mcp-stdio/main.go ``` @@ -67,6 +69,7 @@ Command-line tool for testing and debugging MCP server functionality. **📖 [Full HTTP CLI Documentation](cmd/mcp-http-cli/README.md)** Quick start: + ```bash go run cmd/mcp-http-cli/main.go -mcp-url=https://mcp.example.com list-tools ``` @@ -79,6 +82,7 @@ go run cmd/mcp-http-cli/main.go -mcp-url=https://mcp.example.com list-tools ## 🧪 Development & Testing ### Running Tests + ```bash # Run all tests go test ./... @@ -88,6 +92,7 @@ go test ./internal/twprojects/ ``` ### MCP Inspector + For debugging purposes, use the [MCP Inspector tool](https://github.com/modelcontextprotocol/inspector): ```bash diff --git a/template-minimal/README.md b/template-minimal/README.md index f14d8cfa..fff366a1 100644 --- a/template-minimal/README.md +++ b/template-minimal/README.md @@ -46,6 +46,7 @@ After creating your MCP: - See `.cursor/rules/app-json-schema.mdc` for complete schema 4. 
**Test locally** + ```bash bun run dev ``` @@ -63,6 +64,7 @@ After creating your MCP: ## Examples Check these MCPs for reference: + - **Simple**: `perplexity/` - API-only MCP - **Google OAuth**: `google-calendar/` - OAuth + API - **Complex Config**: `slack-mcp/` - Organized StateSchema @@ -74,4 +76,3 @@ Check these MCPs for reference: - [StateSchema Patterns](.cursor/rules/mcp-creation.mdc#stateschema-organization-pattern) - [Bindings Guide](.cursor/rules/bindings.mdc) - [app.json Schema](.cursor/rules/app-json-schema.mdc) - diff --git a/template-minimal/app.json b/template-minimal/app.json index ba57100a..2a0179e4 100644 --- a/template-minimal/app.json +++ b/template-minimal/app.json @@ -10,4 +10,3 @@ "icon": "https://assets.decocache.com/mcp/template.svg", "unlisted": true } - diff --git a/template-minimal/package.json b/template-minimal/package.json index d53f9d3b..308cb369 100644 --- a/template-minimal/package.json +++ b/template-minimal/package.json @@ -1,8 +1,8 @@ { "name": "mcp-template-minimal", "version": "1.0.0", - "description": "Minimal template for MCP server", "private": true, + "description": "Minimal template for MCP server", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/thoughtspot/README.md b/thoughtspot/README.md index b75cc1a1..1a0af25b 100644 --- a/thoughtspot/README.md +++ b/thoughtspot/README.md @@ -4,8 +4,7 @@
-# ThoughtSpot MCP Server
![MCP Server](https://badge.mcpx.dev?type=server 'MCP Server') ![Static Badge](https://img.shields.io/badge/cloudflare%20worker-deployed-green?link=https%3A%2F%2Fdash.cloudflare.com%2F485d90aa3d1ea138ad7ede769fe2c35e%2Fworkers%2Fservices%2Fview%2Fthoughtspot-mcp-server%2Fproduction%2Fmetrics) ![GitHub branch check runs](https://img.shields.io/github/check-runs/thoughtspot/mcp-server/main) [![Coverage Status](https://coveralls.io/repos/github/thoughtspot/mcp-server/badge.svg?branch=main)](https://coveralls.io/github/thoughtspot/mcp-server?branch=main) Discord: ThoughtSpot - +# ThoughtSpot MCP Server
![MCP Server](https://badge.mcpx.dev?type=server "MCP Server") ![Static Badge](https://img.shields.io/badge/cloudflare%20worker-deployed-green?link=https%3A%2F%2Fdash.cloudflare.com%2F485d90aa3d1ea138ad7ede769fe2c35e%2Fworkers%2Fservices%2Fview%2Fthoughtspot-mcp-server%2Fproduction%2Fmetrics) ![GitHub branch check runs](https://img.shields.io/github/check-runs/thoughtspot/mcp-server/main) [![Coverage Status](https://coveralls.io/repos/github/thoughtspot/mcp-server/badge.svg?branch=main)](https://coveralls.io/github/thoughtspot/mcp-server?branch=main) Discord: ThoughtSpot The ThoughtSpot MCP Server provides secure OAuth-based authentication and a set of tools for querying and retrieving relevant data from your ThoughtSpot instance. It's a remote server hosted on Cloudflare. @@ -34,19 +33,20 @@ Join our [Discord](https://developers.thoughtspot.com/join-discord) to get suppo - [Local Development](#local-development) - [Endpoints](#endpoints) - ## Connect If using a client which supports remote MCPs natively (Claude.ai etc) then just enter: -MCP Server URL: +MCP Server URL: ``` https://agent.thoughtspot.app/mcp ``` + Preferred Auth method: Oauth - For OpenAI ChatGPT Deep Research, add the URL as: + ```js https://agent.thoughtspot.app/openai/mcp ``` @@ -58,10 +58,7 @@ To configure this MCP server in your MCP client (such as Claude Desktop, Windsur "mcpServers": { "ThoughtSpot": { "command": "npx", - "args": [ - "mcp-remote", - "https://agent.thoughtspot.app/mcp" - ] + "args": ["mcp-remote", "https://agent.thoughtspot.app/mcp"] } } } @@ -87,7 +84,7 @@ Watch on [Loom](https://www.loom.com/share/433988d98a7b41fb8df2239da014169a?sid= ## Usage in APIs -ThoughtSpot's remote MCP server can be used in LLM APIs which support calling MCP tools. +ThoughtSpot's remote MCP server can be used in LLM APIs which support calling MCP tools. 
Here are examples with the common LLM providers: @@ -116,7 +113,6 @@ curl https://api.openai.com/v1/responses \ More details on how can you use OpenAI API with MCP tool calling can be found [here](https://platform.openai.com/docs/guides/tools-remote-mcp). - ### Claude MCP Connector ```bash @@ -129,7 +125,7 @@ curl https://api.anthropic.com/v1/messages \ "model": "claude-sonnet-4-20250514", "max_tokens": 1000, "messages": [{ - "role": "user", + "role": "user", "content": "How do I increase my sales ?" }], "mcp_servers": [ @@ -147,33 +143,33 @@ Note: In the `authorization_token` field we have suffixed the ThoughtSpot instan More details on Claude MCP connector [here](https://docs.anthropic.com/en/docs/agents-and-tools/mcp-connector). - ### Gemini API MCP tools can be used with the Gemini Python/Typescript SDK. Here is an example using typescript: ```typescript -import { GoogleGenAI, FunctionCallingConfigMode , mcpToTool} from '@google/genai'; +import { GoogleGenAI, FunctionCallingConfigMode, mcpToTool } from "@google/genai"; import { Client } from "@modelcontextprotocol/sdk/client/index.js"; import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js"; // Create server parameters for stdio connection -const serverParams = new StreamableHTTPClientTransport(new URL("https://agent.thoughtspot.app/bearer/mcp"), { - requestInit: { - headers: { - "Authorization": "Bearer $TS_AUTH_TOKEN", // Read below how to get the $TS_AUTH_TOKEN - "x-ts-host": "my-thoughtspot-instance.thoughtspot.cloud" - }, - } -}); - -const client = new Client( +const serverParams = new StreamableHTTPClientTransport( + new URL("https://agent.thoughtspot.app/bearer/mcp"), { - name: "example-client", - version: "1.0.0" - } + requestInit: { + headers: { + Authorization: "Bearer $TS_AUTH_TOKEN", // Read below how to get the $TS_AUTH_TOKEN + "x-ts-host": "my-thoughtspot-instance.thoughtspot.cloud", + }, + }, + }, ); +const client = new Client({ + name: "example-client", 
+ version: "1.0.0",
+});
+
// Configure the client
const ai = new GoogleGenAI({});

@@ -185,14 +181,14 @@ const response = await ai.models.generateContent({
  model: "gemini-2.5-flash",
  contents: `What is the weather in London in ${new Date().toLocaleDateString()}?`,
  config: {
- tools: [mcpToTool(client)], // uses the session, will automatically call the tool
+ tools: [mcpToTool(client)], // uses the session, will automatically call the tool
// Uncomment if you **don't** want the sdk to automatically call the tool
// automaticFunctionCalling: {
// disable: true,
// },
},
});
-console.log(response.text)
+console.log(response.text);

// Close the connection
await client.close();
@@ -212,11 +208,9 @@ gemini extensions install https://github.com/thoughtspot/mcp-server

Read more about Gemini CLI [here](https://github.com/google-gemini/gemini-cli).

-
### How to get TS_AUTH_TOKEN for APIs ?

-For API usage, you would the token endpoints with a `secret_key` to generate the `API_TOKEN` for a specific user/role, more details [here](https://developers.thoughtspot.com/docs/api-authv2#trusted-auth-v2).
-
+For API usage, you would use the token endpoints with a `secret_key` to generate the `API_TOKEN` for a specific user/role, more details [here](https://developers.thoughtspot.com/docs/api-authv2#trusted-auth-v2).

## Features

@@ -230,14 +224,13 @@ For API usage, you would the token endpoints with a `secret_key` to generate the
- `createLiveboard`: Create a liveboard from a list of answers.
- `getDataSourceSuggestions`: Get datasource suggestions for a given query.
- **MCP Resources**:
- - `datasources`: List of ThoughtSpot Data models the user has access to. 
+ - `datasources`: List of ThoughtSpot Data models the user has access to.
### Supported transports

- SSE: https://agent.thoughtspot.app/sse
- Streamed HTTP: https://agent.thoughtspot.app/mcp

-
## Manual client registration

For MCP hosts which do not(yet) support Dynamic client registration, or they require statically adding Oauth Client Id etc. Go to [this](https://agent.thoughtspot.app/clients) page, to register a new client and copy the details over. The most relevant values are `Oauth Client Id` and `Oauth Client Secret` which should be added when adding ThoughtSpot as an MCP connector in the MCP client (ChatGPT/Claude etc). The generated client details are only available when they are generated and are NOT available later for reference.

@@ -257,6 +250,7 @@ Manual client registration also allows to associate with a specific ThoughtSpot
- You should get a token in the response, thats the bearer token.

#### Alternative way to get `TS_AUTH_TOKEN`
+
- Login to the ThoughtSpot instance as you would normally.
- Opem in a new tab this URL:
- https://your-ts-instance/api/rest/2.0/auth/session/token
@@ -269,19 +263,19 @@ Manual client registration also allows to associate with a specific ThoughtSpot

Make sure to add the following entries in your ThoughtSpot instance:

-*CORS*
+_CORS_

- Go to ThoughtSpot => _Develop_ => Security settings
- Click "Edit"
-- Add "agent.thoughtspot.app" to the the "CORS whitelisted domains". 
+- Add "agent.thoughtspot.app" to the "CORS whitelisted domains".

-*SAML* (need to be Admin)
+_SAML_ (need to be Admin)

- Go to ThoughtSpot => _Develop_
- Go to "All Orgs" Tab on the left panel if there is one.
- Click "Security settings"
- Click "Edit"
-- Add "agent.thoughtspot.app" to the the "SAML redirect domains". 
+- Add "agent.thoughtspot.app" to the "SAML redirect domains".
> MCP server install error due to node issues diff --git a/thousandeyes/README.md b/thousandeyes/README.md index a4b8004f..e59c4f43 100644 --- a/thousandeyes/README.md +++ b/thousandeyes/README.md @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Query network path and performance data from ThousandEyes agents - Monitor internet outages, BGP changes, and routing anomalies - Analyze end-user experience and application delivery metrics diff --git a/tiktok-ads/README.md b/tiktok-ads/README.md index 4277e26e..88cc9fe1 100644 --- a/tiktok-ads/README.md +++ b/tiktok-ads/README.md @@ -5,24 +5,28 @@ MCP Server for TikTok Marketing API integration. Manage campaigns, ad groups, ad ## Features ### Campaign Management + - **list_campaigns** - List all campaigns with filters - **get_campaign** - Get details of a specific campaign - **create_campaign** - Create a new campaign - **update_campaign** - Update existing campaign ### Ad Group Management + - **list_adgroups** - List ad groups with filters - **get_adgroup** - Get details of a specific ad group - **create_adgroup** - Create a new ad group - **update_adgroup** - Update existing ad group ### Ad Management + - **list_ads** - List ads with filters - **get_ad** - Get details of a specific ad - **create_ad** - Create a new ad - **update_ad** - Update existing ad ### Reports & Analytics + - **get_report** - Get custom performance reports - **get_campaign_report** - Get campaign performance metrics - **get_adgroup_report** - Get ad group performance metrics @@ -206,28 +210,28 @@ tiktok-ads/ ## Campaign Objectives -| Objective | Description | -|-----------|-------------| -| TRAFFIC | Drive traffic to your website | -| APP_PROMOTION | Promote app installs and engagement | -| WEB_CONVERSIONS | Drive website conversions | -| PRODUCT_SALES | Sell products from a catalog | -| REACH | Maximize reach to your audience | -| VIDEO_VIEWS | Get more video views | -| LEAD_GENERATION | Collect leads from forms | -| 
COMMUNITY_INTERACTION | Increase profile engagement | +| Objective | Description | +| --------------------- | ----------------------------------- | +| TRAFFIC | Drive traffic to your website | +| APP_PROMOTION | Promote app installs and engagement | +| WEB_CONVERSIONS | Drive website conversions | +| PRODUCT_SALES | Sell products from a catalog | +| REACH | Maximize reach to your audience | +| VIDEO_VIEWS | Get more video views | +| LEAD_GENERATION | Collect leads from forms | +| COMMUNITY_INTERACTION | Increase profile engagement | ## Optimization Goals -| Goal | Description | -|------|-------------| -| CLICK | Optimize for clicks | -| CONVERT | Optimize for conversions | -| SHOW | Optimize for impressions | -| REACH | Optimize for unique reach | -| VIDEO_VIEW | Optimize for video views | +| Goal | Description | +| --------------- | ---------------------------------- | +| CLICK | Optimize for clicks | +| CONVERT | Optimize for conversions | +| SHOW | Optimize for impressions | +| REACH | Optimize for unique reach | +| VIDEO_VIEW | Optimize for video views | | LEAD_GENERATION | Optimize for lead form submissions | -| ENGAGEMENT | Optimize for profile engagement | +| ENGAGEMENT | Optimize for profile engagement | ## Report Metrics @@ -262,6 +266,7 @@ Este MCP usa autenticação via Access Token direto, gerado no TikTok Developer - ✅ **Simples de configurar** - Basta gerar o token e colar na instalação Para gerar o token: + 1. Acesse [TikTok Developer Portal](https://business-api.tiktok.com/portal/apps/) 2. Vá em "Tools" > "Access Token" 3. 
Gere e copie o token @@ -269,10 +274,10 @@ Para gerar o token: ## API Reference This MCP uses the TikTok Marketing API v1.3: + - Base URL: `https://business-api.tiktok.com/open_api/v1.3/` - [Official Documentation](https://business-api.tiktok.com/marketing_api/docs) ## License MIT - diff --git a/tiktok-ads/app.json b/tiktok-ads/app.json index 4033ff99..be1b60df 100644 --- a/tiktok-ads/app.json +++ b/tiktok-ads/app.json @@ -17,4 +17,3 @@ "mesh_description": "The TikTok Ads MCP provides comprehensive integration with TikTok Marketing API, enabling full programmatic control over advertising campaigns. This MCP allows AI agents to create, read, update campaigns, ad groups, and individual ads, analyze performance metrics, manage audiences, and generate detailed reports. It supports advanced advertising features including campaign budget optimization, targeting configuration, bid strategies, and performance tracking across multiple advertisers. The integration is perfect for building intelligent advertising assistants, automated campaign managers, and performance optimization tools. Ideal for marketers and agencies who need to automate TikTok advertising workflows, integrate campaign management into business processes, or build advertising-aware applications. Provides secure authentication for API access." 
} } - diff --git a/tiktok-ads/package.json b/tiktok-ads/package.json index f62463da..4fb9461b 100644 --- a/tiktok-ads/package.json +++ b/tiktok-ads/package.json @@ -1,8 +1,8 @@ { "name": "tiktok-ads", "version": "1.0.0", - "description": "TikTok Ads MCP Server - Manage campaigns, ad groups, ads and reports", "private": true, + "description": "TikTok Ads MCP Server - Manage campaigns, ad groups, ads and reports", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/tiktok-ads/tsconfig.json b/tiktok-ads/tsconfig.json index 3819d252..84f1d319 100644 --- a/tiktok-ads/tsconfig.json +++ b/tiktok-ads/tsconfig.json @@ -25,13 +25,6 @@ }, "types": ["bun-types"] }, - "include": [ - "server/**/*.ts", - "shared/**/*.ts" - ], - "exclude": [ - "node_modules", - "dist" - ] + "include": ["server/**/*.ts", "shared/**/*.ts"], + "exclude": ["node_modules", "dist"] } - diff --git a/todoist/README.md b/todoist/README.md index 59e7caf1..8abb8a5d 100644 --- a/todoist/README.md +++ b/todoist/README.md @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Create, update, and complete tasks in Todoist - Manage projects, labels, and priorities - Automate task management workflows with AI diff --git a/trunk/README.md b/trunk/README.md index f9a97786..7bd2e9e3 100644 --- a/trunk/README.md +++ b/trunk/README.md @@ -17,12 +17,12 @@ Trunk Flaky Tests comes with a [Model Context Protocol (MCP)](https://modelconte ### Supported AI Applications -| Application | Supported | Guide | Plugin | -| ---------------------------------------------------------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------------- | --------------------------------------------------------------- | -| [Cursor](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/cursor-ide) | Yes | [Setup 
guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/cursor-ide) | [Cursor plugin](https://github.com/trunk-io/cursor-plugin) | +| Application | Supported | Guide | Plugin | +| --------------------------------------------------------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------- | +| [Cursor](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/cursor-ide) | Yes | [Setup guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/cursor-ide) | [Cursor plugin](https://github.com/trunk-io/cursor-plugin) | | [Claude Code](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/claude-code-cli) | Yes | [Setup guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/claude-code-cli) | [Claude Code plugin](https://github.com/trunk-io/claude-code-plugin) | -| [GitHub Copilot](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/github-copilot-ide) | Yes | [Setup guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/github-copilot-ide) | | -| [Gemini CLI](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/gemini-cli) | Yes | [Setup guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/gemini-cli) | | +| [GitHub Copilot](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/github-copilot-ide) | Yes | [Setup guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/github-copilot-ide) | | +| [Gemini CLI](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/gemini-cli) | Yes | [Setup guide](https://docs.trunk.io/flaky-tests/use-mcp-server/configuration/gemini-cli) | | > [!NOTE] > Gemini Code Assist and Windsurf are not supported due to their limited support for MCP servers. 
@@ -82,8 +82,8 @@ Add to `~/.gemini/settings.json`: The MCP server is available at `https://mcp.trunk.io/mcp` and exposes the following tools: -| Tool | Description | -| ----------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- | +| Tool | Description | +| ---------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------- | | [`fix-flaky-test`](https://docs.trunk.io/flaky-tests/use-mcp-server/mcp-tool-reference/get-root-cause-analysis) | Retrieve root cause analysis and fix suggestions for flaky tests | | [`setup-trunk-uploads`](https://docs.trunk.io/flaky-tests/use-mcp-server/mcp-tool-reference/set-up-test-uploads) | Generate a setup plan to upload test results to Trunk | diff --git a/trunk/app.json b/trunk/app.json index 76c7c1e9..e2d47f94 100644 --- a/trunk/app.json +++ b/trunk/app.json @@ -13,7 +13,14 @@ "metadata": { "categories": ["Development", "Monitoring"], "official": true, - "tags": ["trunk", "flaky-tests", "test-analysis", "root-cause-analysis", "code-quality", "development"], + "tags": [ + "trunk", + "flaky-tests", + "test-analysis", + "root-cause-analysis", + "code-quality", + "development" + ], "short_description": "Flaky test detection, root cause analysis, and fix suggestions for development teams", "mesh_description": "Provides flaky test detection, root cause analysis, and fix suggestions for development teams. Trunk MCP helps engineering teams identify and resolve unreliable tests, reducing CI/CD pipeline failures and improving code quality. It analyzes test patterns, identifies flakiness root causes, and suggests targeted fixes. Perfect for teams dealing with intermittent test failures and seeking to improve their testing infrastructure." 
} diff --git a/twelvelabs/README.md b/twelvelabs/README.md index 4244b213..be3c6fab 100644 --- a/twelvelabs/README.md +++ b/twelvelabs/README.md @@ -7,6 +7,7 @@ ### Purpose This MCP server allows client applications to: + - Index and search video content using natural language queries - Generate embeddings and analyze video at a semantic level - Extract insights, summaries, and structured data from video files diff --git a/veo/app.json b/veo/app.json index f3a85441..81f9f2fe 100644 --- a/veo/app.json +++ b/veo/app.json @@ -15,7 +15,15 @@ "metadata": { "categories": ["AI"], "official": false, - "tags": ["video-generation", "veo", "gemini", "ai", "text-to-video", "image-to-video", "google"], + "tags": [ + "video-generation", + "veo", + "gemini", + "ai", + "text-to-video", + "image-to-video", + "google" + ], "short_description": "Generate videos using Google Veo 3 and 3.1 models", "mesh_description": "The Veo MCP provides AI-powered video generation using Google Gemini Veo 3 and Veo 3.1 models. This MCP enables AI agents to generate videos from text prompts, use reference images to guide generation, and extend existing videos. **Key Features** - Text-to-video generation with detailed prompts. **Image-to-Video** - Use reference images to guide video generation. **Video Extension** - Extend existing videos with new prompts. **Multiple Models** - Support for Veo 2.0, 3.0, and 3.1 models with different capabilities. **Aspect Ratios** - Support for 16:9 and 9:16 output dimensions. **Object Storage** - Automatic video storage via object storage binding." 
} diff --git a/veo/package.json b/veo/package.json index c8be152f..baf39c25 100644 --- a/veo/package.json +++ b/veo/package.json @@ -1,8 +1,8 @@ { "name": "veo", "version": "1.0.0", - "description": "Veo 3 MCP", "private": true, + "description": "Veo 3 MCP", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/veo/server/tools/veo.ts b/veo/server/tools/veo.ts index b4620d4d..581cfe91 100644 --- a/veo/server/tools/veo.ts +++ b/veo/server/tools/veo.ts @@ -212,7 +212,9 @@ const createGetGeneratedVideoTool = (env: Env) => const videoStream = await client.downloadVideo(video.video.uri); const videoBlob = new Blob( [await new Response(videoStream).arrayBuffer()], - { type: mimeType }, + { + type: mimeType, + }, ); const objectStorage = getObjectStorage(env); diff --git a/veo/tsconfig.json b/veo/tsconfig.json index 615c8643..d1f4a7bc 100644 --- a/veo/tsconfig.json +++ b/veo/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023" - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, /* Bundler mode */ @@ -24,16 +22,10 @@ /* Path Aliases */ "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ] + "server/*": ["./server/*"] }, /* Types */ - "types": [ - "@types/node" - ] + "types": ["@types/node"] }, - "include": [ - "server" - ] + "include": ["server"] } diff --git a/vercel-official/README.md b/vercel-official/README.md index c1d7bc6f..716d73d9 100644 --- a/vercel-official/README.md +++ b/vercel-official/README.md @@ -39,5 +39,4 @@ https://mcp.vercel.com/ --- -*This MCP requires an active Vercel account to function.* - +_This MCP requires an active Vercel account to function._ diff --git a/vercel-official/app.json b/vercel-official/app.json index 4728433e..23b97f64 100644 --- a/vercel-official/app.json +++ b/vercel-official/app.json @@ -13,7 +13,17 @@ "categories": ["Software Development"], "official": true, "mesh_unlisted": true, - "tags": ["deployment", 
"frontend", "nextjs", "serverless", "vercel", "jamstack", "cdn", "edge-functions", "ci-cd"], + "tags": [ + "deployment", + "frontend", + "nextjs", + "serverless", + "vercel", + "jamstack", + "cdn", + "edge-functions", + "ci-cd" + ], "short_description": "Deploy and manage your frontend applications with Vercel", "mesh_description": "Vercel is the platform for frontend developers, providing zero-configuration deployments for modern web applications with automatic HTTPS, CDN, and serverless functions. This official MCP enables you to deploy Next.js, React, Vue, Angular, Svelte, and static sites with git integration for automatic deployments on every push. Experience instant rollbacks, preview deployments for every pull request, and production deployments on merge. Manage custom domains with automatic SSL certificate provisioning, DNS configuration, and domain redirects. Configure environment variables per deployment environment (production, preview, development) with encrypted secrets management. Deploy Edge Functions that run on Vercel's global edge network for ultra-low latency, and Serverless Functions for backend API routes with automatic scaling. Access real-time Web Analytics with Core Web Vitals tracking, page performance metrics, and visitor insights without impacting performance. Use Edge Config for ultra-low-latency data access at the edge, and KV for serverless Redis-compatible storage. Configure build settings including framework presets, custom build commands, and output directories. Set up team collaboration with role-based access control, audit logs, and deployment protection rules. Integrate with monitoring tools, implement A/B testing with Edge Middleware, and optimize images automatically with Vercel Image Optimization. Perfect for teams building high-performance web applications." 
} diff --git a/virtual-try-on/README.md b/virtual-try-on/README.md index fec84a25..f8a662c3 100644 --- a/virtual-try-on/README.md +++ b/virtual-try-on/README.md @@ -9,8 +9,8 @@ and delegates generation to **nanobanana**, returning a generated image URL. ## Configuration (State) -| Field | Type | Description | -|---|---|---| +| Field | Type | Description | +| ------------ | -------------------------- | --------------------------- | | `NANOBANANA` | `@deco/nanobanana` binding | Nanobanana image generator. | Simply connect a nanobanana MCP through the Mesh UI. diff --git a/virtual-try-on/app.json b/virtual-try-on/app.json index 8c076c5d..c7526bc9 100644 --- a/virtual-try-on/app.json +++ b/virtual-try-on/app.json @@ -12,9 +12,16 @@ "metadata": { "categories": ["AI", "Productivity"], "official": false, - "tags": ["virtual-try-on", "fashion", "clothing", "image-generation", "ai", "e-commerce", "garment"], + "tags": [ + "virtual-try-on", + "fashion", + "clothing", + "image-generation", + "ai", + "e-commerce", + "garment" + ], "short_description": "Generate virtual try-on images combining person photos with garment images.", "mesh_description": "The Virtual Try-On MCP enables AI-powered virtual clothing try-on by combining a person's photo with garment images. This MCP delegates image generation to a configured image generator (like nanobanana) and returns realistic composites showing how the clothes would look on the person. **Key Features** - Combine person photos with one or more garment images. **Garment Types** - Support for tops, bottoms, dresses, outerwear, shoes, and accessories. **Face Preservation** - Maintains the person's identity and facial features. **Background Preservation** - Keeps the original background intact. **Flexible Aspect Ratios** - Support for various output dimensions. Perfect for e-commerce, fashion apps, and virtual fitting rooms. Requires connection to an image generator MCP." 
} } - diff --git a/virtual-try-on/package.json b/virtual-try-on/package.json index e4f87a29..a47e5bde 100644 --- a/virtual-try-on/package.json +++ b/virtual-try-on/package.json @@ -1,8 +1,8 @@ { "name": "virtual-try-on", "version": "1.0.0", - "description": "Virtual try-on MCP: compose person photo + garment images and delegate to an image generator (e.g., nanobanana).", "private": true, + "description": "Virtual try-on MCP: compose person photo + garment images and delegate to an image generator (e.g., nanobanana).", "type": "module", "scripts": { "dev": "bun run --hot server/dev-server.ts", @@ -27,5 +27,3 @@ "node": ">=22.0.0" } } - - diff --git a/virtual-try-on/tsconfig.json b/virtual-try-on/tsconfig.json index f3c0d4c1..38114de9 100644 --- a/virtual-try-on/tsconfig.json +++ b/virtual-try-on/tsconfig.json @@ -2,9 +2,7 @@ "compilerOptions": { "target": "ES2022", "useDefineForClassFields": true, - "lib": [ - "ES2023" - ], + "lib": ["ES2023"], "module": "ESNext", "skipLibCheck": true, "moduleResolution": "bundler", @@ -21,17 +19,9 @@ "noUncheckedSideEffectImports": true, "baseUrl": ".", "paths": { - "server/*": [ - "./server/*" - ] + "server/*": ["./server/*"] }, - "types": [ - "@types/node" - ] + "types": ["@types/node"] }, - "include": [ - "server" - ] + "include": ["server"] } - - diff --git a/vtex-docs/README.md b/vtex-docs/README.md index b9584fdd..c1ce6ab1 100644 --- a/vtex-docs/README.md +++ b/vtex-docs/README.md @@ -1,5 +1,5 @@ -# VTEX Docs MCP - +# VTEX Docs MCP + RAG-based MCP for searching VTEX documentation using hybrid search (semantic + full-text). ## Features @@ -55,12 +55,14 @@ bun run deploy Hybrid search (semantic + full-text) for documentation chunks. 
**Input:** + - `query` (string): Search query in natural language - `language` (optional): "en" or "pt-br" - `limit` (optional): Number of results (1-20, default: 8) - `semanticWeight` (optional): Weight for semantic vs full-text search (0-1, default: 0.3) **Output:** + - `results`: Array of matching documents with: - `content`: The chunk content - `title`: Document title diff --git a/vtex-docs/package.json b/vtex-docs/package.json index 10b61eab..1a89f846 100644 --- a/vtex-docs/package.json +++ b/vtex-docs/package.json @@ -1,8 +1,8 @@ { "name": "vtex-docs", "version": "1.0.0", - "description": "RAG-based MCP for VTEX documentation search", "private": true, + "description": "RAG-based MCP for VTEX documentation search", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/vtex-docs/tsconfig.json b/vtex-docs/tsconfig.json index c5b23929..392b6275 100644 --- a/vtex-docs/tsconfig.json +++ b/vtex-docs/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/vtex/package.json b/vtex/package.json index 21d403b9..48a4a1b8 100644 --- a/vtex/package.json +++ b/vtex/package.json @@ -1,8 +1,8 @@ { "name": "vtex", "version": "1.0.0", - "description": "MCP for VTEX Commerce APIs - Catalog, Orders, and Logistics/Inventory", "private": true, + "description": "MCP for VTEX Commerce APIs - Catalog, Orders, and Logistics/Inventory", "type": "module", "scripts": { "dev": "bun run --hot server/main.ts", diff --git a/vtex/server/main.ts b/vtex/server/main.ts index 72ea8e88..ef857f37 100644 --- a/vtex/server/main.ts +++ b/vtex/server/main.ts @@ -21,4 +21,10 @@ const runtime = withRuntime({ tools, }); -serve(runtime.fetch); +serve((req, ...args) => { + // Fast path for K8s readiness/liveness probes hitting GET / + if (req.method === "GET" && new URL(req.url).pathname === "/") { + return new 
Response("OK", { status: 200 }); + } + return runtime.fetch(req, ...args); +}); diff --git a/whatsapp-management/README.md b/whatsapp-management/README.md index cc69dcd2..8e731d1d 100644 --- a/whatsapp-management/README.md +++ b/whatsapp-management/README.md @@ -66,11 +66,13 @@ List all phone numbers for the business account. Creates a new phone number for the business account. **Input:** + - `countryCode` (string) - Country code (e.g., "1" for US) - `phoneNumber` (string) - Phone number - `verifiedName` (string) - Display name for the number **Output:** + - `id` (string) - The created phone number ID ### REQUEST_CODE_FOR_PHONE_NUMBER @@ -78,11 +80,13 @@ Creates a new phone number for the business account. Requests a verification code for a registered phone number. **Input:** + - `phoneNumberId` (string) - The phone number ID - `codeMethod` (optional, "SMS" | "VOICE") - Delivery method (default: "SMS") - `language` (optional, string) - Language code (default: "en_US") **Output:** + - `success` (boolean) - Whether the request was successful ### VERIFY_CODE_FOR_PHONE_NUMBER @@ -90,10 +94,12 @@ Requests a verification code for a registered phone number. Validates a verification code for a registered phone number. **Input:** + - `phoneNumberId` (string) - The phone number ID - `code` (string) - The verification code received **Output:** + - `success` (boolean) - Whether verification was successful ### REGISTER_PHONE_NUMBER @@ -101,10 +107,12 @@ Validates a verification code for a registered phone number. Registers a phone number for the business account. **Input:** + - `phoneNumberId` (string) - The phone number ID - `pin` (string) - 6-digit PIN for two-step verification **Output:** + - `success` (boolean) - Whether registration was successful ### UPDATE_PHONE_NUMBER_WEBHOOK @@ -112,6 +120,7 @@ Registers a phone number for the business account. Update the webhook configuration for a phone number. 
**Input:** + - `phoneNumberId` (string) - The phone number ID - `webhookUrl` (string) - The webhook URL - `verifyToken` (string) - Token for webhook verification diff --git a/whatsapp-management/package.json b/whatsapp-management/package.json index 9a08146b..2eabe373 100644 --- a/whatsapp-management/package.json +++ b/whatsapp-management/package.json @@ -1,8 +1,8 @@ { "name": "whatsapp-management", "version": "1.0.0", - "description": "WhatsApp Business Management MCP", "private": true, + "description": "WhatsApp Business Management MCP", "type": "module", "scripts": { "dev": "PORT=8003 bun run --hot server/main.ts", @@ -24,4 +24,4 @@ "engines": { "node": ">=22.0.0" } -} \ No newline at end of file +} diff --git a/whatsapp-management/tsconfig.json b/whatsapp-management/tsconfig.json index 717a515f..ebbc986b 100644 --- a/whatsapp-management/tsconfig.json +++ b/whatsapp-management/tsconfig.json @@ -27,14 +27,11 @@ "baseUrl": ".", "paths": { "shared/*": ["./shared/*"], - "server/*": ["./server/*"], + "server/*": ["./server/*"] }, /* Types */ "types": ["node"] }, - "include": [ - "server", - "shared" - ] + "include": ["server", "shared"] } diff --git a/whatsapp/README.md b/whatsapp/README.md index cc69dcd2..8e731d1d 100644 --- a/whatsapp/README.md +++ b/whatsapp/README.md @@ -66,11 +66,13 @@ List all phone numbers for the business account. Creates a new phone number for the business account. **Input:** + - `countryCode` (string) - Country code (e.g., "1" for US) - `phoneNumber` (string) - Phone number - `verifiedName` (string) - Display name for the number **Output:** + - `id` (string) - The created phone number ID ### REQUEST_CODE_FOR_PHONE_NUMBER @@ -78,11 +80,13 @@ Creates a new phone number for the business account. Requests a verification code for a registered phone number. 
**Input:** + - `phoneNumberId` (string) - The phone number ID - `codeMethod` (optional, "SMS" | "VOICE") - Delivery method (default: "SMS") - `language` (optional, string) - Language code (default: "en_US") **Output:** + - `success` (boolean) - Whether the request was successful ### VERIFY_CODE_FOR_PHONE_NUMBER @@ -90,10 +94,12 @@ Requests a verification code for a registered phone number. Validates a verification code for a registered phone number. **Input:** + - `phoneNumberId` (string) - The phone number ID - `code` (string) - The verification code received **Output:** + - `success` (boolean) - Whether verification was successful ### REGISTER_PHONE_NUMBER @@ -101,10 +107,12 @@ Validates a verification code for a registered phone number. Registers a phone number for the business account. **Input:** + - `phoneNumberId` (string) - The phone number ID - `pin` (string) - 6-digit PIN for two-step verification **Output:** + - `success` (boolean) - Whether registration was successful ### UPDATE_PHONE_NUMBER_WEBHOOK @@ -112,6 +120,7 @@ Registers a phone number for the business account. Update the webhook configuration for a phone number. 
**Input:** + - `phoneNumberId` (string) - The phone number ID - `webhookUrl` (string) - The webhook URL - `verifyToken` (string) - Token for webhook verification diff --git a/whatsapp/app.json b/whatsapp/app.json index af2fa76b..68cd6f04 100644 --- a/whatsapp/app.json +++ b/whatsapp/app.json @@ -12,7 +12,16 @@ "metadata": { "categories": ["Communication"], "official": true, - "tags": ["whatsapp", "messaging", "business", "meta", "chat", "automation", "webhook", "communication"], + "tags": [ + "whatsapp", + "messaging", + "business", + "meta", + "chat", + "automation", + "webhook", + "communication" + ], "short_description": "Talk to your Mesh Agents via WhatsApp" } } diff --git a/whatsapp/package.json b/whatsapp/package.json index 2167a26d..5ecb0694 100644 --- a/whatsapp/package.json +++ b/whatsapp/package.json @@ -1,8 +1,8 @@ { "name": "whatsappagent", "version": "1.0.0", - "description": "WhatsApp MCP", "private": true, + "description": "WhatsApp MCP", "type": "module", "scripts": { "dev": "bun run scripts/dev.ts", @@ -27,4 +27,4 @@ "engines": { "node": ">=22.0.0" } -} \ No newline at end of file +} diff --git a/whatsapp/server/llm.ts b/whatsapp/server/llm.ts index cb45d1c0..0df0cfc2 100644 --- a/whatsapp/server/llm.ts +++ b/whatsapp/server/llm.ts @@ -123,9 +123,7 @@ export async function generateResponseForEvent( if (!response.ok) { const errorText = await response.text(); throw new Error( - `Mesh API error (${response.status}): ${ - errorText || response.statusText - }`, + `Mesh API error (${response.status}): ${errorText || response.statusText}`, ); } diff --git a/whatsapp/tsconfig.json b/whatsapp/tsconfig.json index 717a515f..ebbc986b 100644 --- a/whatsapp/tsconfig.json +++ b/whatsapp/tsconfig.json @@ -27,14 +27,11 @@ "baseUrl": ".", "paths": { "shared/*": ["./shared/*"], - "server/*": ["./server/*"], + "server/*": ["./server/*"] }, /* Types */ "types": ["node"] }, - "include": [ - "server", - "shared" - ] + "include": ["server", "shared"] } diff --git 
a/whisper/README.md b/whisper/README.md index 3a67ca43..29a62ad5 100644 --- a/whisper/README.md +++ b/whisper/README.md @@ -113,7 +113,7 @@ Transcribes an audio file to text. ```typescript const result = await transcribeAudio({ - audioUrl: "https://example.com/audio.mp3" + audioUrl: "https://example.com/audio.mp3", }); console.log(result.text); @@ -124,7 +124,7 @@ console.log(result.text); ```typescript const result = await transcribeAudio({ audioUrl: "https://example.com/audio-pt.mp3", - language: "pt" + language: "pt", }); ``` @@ -133,16 +133,16 @@ const result = await transcribeAudio({ ```typescript const result = await transcribeAudio({ audioUrl: "https://example.com/audio.mp3", - timestampGranularities: ["word", "segment"] + timestampGranularities: ["word", "segment"], }); // Access word-level timestamps -result.words?.forEach(word => { +result.words?.forEach((word) => { console.log(`${word.word} (${word.start}s - ${word.end}s)`); }); // Access segment-level timestamps -result.segments?.forEach(segment => { +result.segments?.forEach((segment) => { console.log(`${segment.text} (${segment.start}s - ${segment.end}s)`); }); ``` @@ -153,7 +153,7 @@ result.segments?.forEach(segment => { const result = await transcribeAudio({ audioUrl: "https://example.com/technical-talk.mp3", prompt: "This is a technical presentation about machine learning and AI.", - language: "en" + language: "en", }); ``` @@ -222,6 +222,7 @@ shared/ (shared code) ### Error: "Cannot find module '@decocms/mcps-shared/audio-transcribers'" Run: + ```bash bun install ``` @@ -229,6 +230,7 @@ bun install ### Error: "OPENAI_API_KEY is not set" Configure the environment variable: + ```bash export OPENAI_API_KEY=your_key_here ``` @@ -268,4 +270,3 @@ bun run configure ## License MIT - diff --git a/whisper/package.json b/whisper/package.json index 37c6761b..b24bb977 100644 --- a/whisper/package.json +++ b/whisper/package.json @@ -1,8 +1,8 @@ { "name": "whisper", "version": "1.0.0", - "description": "Whisper 
MCP for transcribing audio files", "private": true, + "description": "Whisper MCP for transcribing audio files", "type": "module", "scripts": { "dev": "deco dev --vite", diff --git a/whisper/tsconfig.json b/whisper/tsconfig.json index c5b23929..392b6275 100644 --- a/whisper/tsconfig.json +++ b/whisper/tsconfig.json @@ -34,9 +34,5 @@ /* Types */ "types": ["@cloudflare/workers-types"] }, - "include": [ - "server", - "shared", - "vite.config.ts" - ] + "include": ["server", "shared", "vite.config.ts"] } diff --git a/whisper/wrangler.toml b/whisper/wrangler.toml index 61ec55d4..e8b7aef2 100644 --- a/whisper/wrangler.toml +++ b/whisper/wrangler.toml @@ -2,7 +2,7 @@ name = "whisper" main = "server/main.ts" compatibility_date = "2025-06-17" -compatibility_flags = [ "nodejs_compat" ] +compatibility_flags = ["nodejs_compat"] scope = "deco" [deco] @@ -25,4 +25,4 @@ body = "$0.01 per transcription" [[deco.bindings.contract.clauses]] id = "whisper-1:transcribeAudio" price = 0.01 -description = "$0.01 per audio transcription" \ No newline at end of file +description = "$0.01 per audio transcription" diff --git a/wix-official/README.md b/wix-official/README.md index 00017d3a..2686f490 100644 --- a/wix-official/README.md +++ b/wix-official/README.md @@ -39,5 +39,4 @@ https://mcp.wix.com/sse --- -*This MCP requires an active Wix account to function.* - +_This MCP requires an active Wix account to function._ diff --git a/wix-official/app.json b/wix-official/app.json index a1747d74..362c0461 100644 --- a/wix-official/app.json +++ b/wix-official/app.json @@ -13,7 +13,17 @@ "categories": ["CMS"], "official": true, "mesh_unlisted": false, - "tags": ["website-builder", "cms", "e-commerce", "no-code", "wix", "drag-and-drop", "templates", "blogging", "online-store"], + "tags": [ + "website-builder", + "cms", + "e-commerce", + "no-code", + "wix", + "drag-and-drop", + "templates", + "blogging", + "online-store" + ], "short_description": "Build and manage websites with Wix's powerful no-code 
platform", "mesh_description": "Wix is a leading website builder platform with over 200 million users worldwide, offering powerful tools for creating professional websites without coding. This official MCP provides natural language access to Wix's comprehensive features including creating and editing pages with drag-and-drop functionality, managing site structure and navigation, and customizing designs with thousands of templates. Build complete e-commerce stores with product catalogs, inventory management, shopping cart, checkout process, and payment gateway integrations including PayPal, Stripe, and Wix Payments. Create and manage blog content with SEO optimization, scheduling, categories, and social media integration. Use Wix's CMS capabilities to create custom databases and dynamic pages for portfolios, directories, and content-rich sites. Add interactive elements like forms, galleries, video backgrounds, animations, and custom interactions without coding. Configure SEO settings including meta tags, structured data, sitemaps, and mobile optimization for better search rankings. Manage domain names, SSL certificates, and email accounts directly from the platform. Use Wix Apps marketplace to extend functionality with thousands of integrations for marketing, analytics, bookings, events, and more. Create member areas with user authentication, roles, and permissions. Implement marketing tools including email campaigns, social media integration, and marketing automation. Access analytics and insights on visitor behavior, conversion rates, and site performance. Perfect for small businesses, entrepreneurs, and creatives building their online presence." }