
feat(mcp): OAuth2 PKCE Authorization Server for HTTP transport #783

Closed
shigechika wants to merge 46 commits into googleworkspace:main from shigechika:feat/mcp-http-phase2-oauth

Conversation

@shigechika

Summary

  • Implements MCP Authorization spec 2025-11-25 on the Streamable HTTP transport
  • Enable with gws mcp --transport http --auth; Phase 1 (no --auth) is unchanged
  • New OAuth2 AS endpoints: RFC 9728 protected-resource metadata, RFC 8414 AS metadata, DCR stub, PKCE S256 authorize/callback/token flow via Google OAuth2
  • Bearer-auth Axum middleware: every /mcp request requires Authorization: Bearer <token>; unauthenticated requests receive 401 with a WWW-Authenticate header pointing at the AS metadata
  • Session TTL 8 h; pending auth TTL 10 min (in-memory)
  • RFC 7636 §4.6 PKCE S256 unit tests added
  • FORK.md updated with --auth usage, prerequisites, and endpoint table

Endpoints added

| Endpoint | RFC |
| --- | --- |
| /.well-known/oauth-protected-resource | RFC 9728 |
| /.well-known/oauth-authorization-server | RFC 8414 |
| /oauth/register | RFC 7591 (stub) |
| /oauth/authorize | RFC 6749 |
| /oauth/callback | (none) |
| /oauth/token | RFC 6749 + RFC 7636 (PKCE S256) |
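For orientation, the two well-known URLs above are derived from the server's base URL by path insertion (RFC 8414 §3, RFC 9728). A Python sketch of the construction, with localhost:8080 as a hypothetical bind address:

```python
from urllib.parse import urlsplit, urlunsplit

def well_known_url(issuer: str, suffix: str) -> str:
    """Insert a .well-known suffix per RFC 8414 §3 / RFC 9728:
    the suffix goes between the host and the issuer's path component."""
    parts = urlsplit(issuer)
    path = parts.path.rstrip("/")
    return urlunsplit((parts.scheme, parts.netloc,
                       f"/.well-known/{suffix}{path}", "", ""))

base = "http://localhost:8080"
print(well_known_url(base, "oauth-authorization-server"))
# http://localhost:8080/.well-known/oauth-authorization-server
```

A client that receives the 401 challenge fetches the protected-resource metadata first, then follows its authorization_servers entry to the AS metadata.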

Prerequisites for --auth

  1. gws auth setup — creates client_secret.json with a Google OAuth2 web app credential
  2. Add http://localhost:<port>/oauth/callback as an Authorized redirect URI in Google Cloud Console

Test plan

  • cargo clippy -- -D warnings — clean
  • cargo test — 826 passed
  • PKCE S256 unit tests (RFC 7636 §4.6 worked example + wrong-verifier rejection)
  • Smoke test: Phase 1 (--transport http, no --auth) — /mcp returns 406 (correct); /.well-known/oauth-protected-resource returns 404 (correct, routes not registered)
  • Smoke test: --auth without client_secret.json exits with clear error message

Note (Phase 2/3): GWS API calls still use the shared gws auth login token. Bearer tokens identify the user by email but do not yet carry per-user GWS credentials. Per-user token isolation is Phase 4.

🤖 Generated with Claude Code

shigechika and others added 30 commits March 26, 2026 08:00
Re-base the MCP server on upstream v0.22.1.
- Move mcp_server.rs into the cargo workspace structure
- Adapt to the mail_builder API (drop the homegrown MessageBuilder)
- Add UploadSource support to execute_method
- Make Gmail helper functions pub(crate)
- Remove unneeded workflows; add sync-upstream
- Add FORK.md/FORK.ja.md/CLAUDE.md
Merge 6 upstream commits. No conflicts.
… sheets --range

Upstream changes (11 commits, v0.22.3 → v0.22.5):
- refactor: drop serde_yaml, migrate registry to toml
- ci: add cargo-audit and cargo-deny workflows
- feat(sheets): add --range flag to +append
- fix: npm installer improvements (native fetch, SHA256 verify)
- ci: pin cross-rs to v0.2.5

Fork actions:
- Resolved release.yml modify/delete conflict (deleted in fork)
- sync-upstream.yml: changed to always-PR mode (fixes workflows permission issue)
- MCP server and helper tools unaffected
MCP server was added on 2026-03-04 and removed just 2 days later
on 2026-03-06. Updated both English and Japanese versions.
- Add mcp_compose_reply() façade in gmail/mod.rs (Fork-only MCP bridge block)
  to expose reply logic without changing upstream pub(super) visibility
- Extract send_raw_gmail() shared function for Gmail send/reply/forward
- Refactor handle_gmail_send() to use send_raw_gmail()
- Add handle_gmail_reply() with threading headers, reply-to resolution,
  and reply-all support with self-dedup
- Improve helper dispatch to match expression for extensibility
- Add get_required_str/get_optional_str utilities
- Add 12 new tests (29 total MCP tests passing)
- AGENTS.md: add MCP Helper Tools section with façade pattern explanation,
  new helper addition procedure, and upstream merge checklist
- FORK.md / FORK.ja.md: add gmail_reply to helper tools table
Add build.rs to capture git describe at compile time. Version output
now shows: gws 0.22.5 (fork/v0.22.5-mcp.1)

This helps identify which fork revision is installed across machines.
build.rs is fork-only and does not exist in upstream.
Previous v0.22.5 merge (cc08dbc) lost its second parent during some
rebase/squash operation, causing git to see upstream/main as unmerged
even though all v0.22.5 content is already in origin/main. This
empty -s ours merge restores the history link without changing files.
When upstream modifies a workflow file that the fork has deleted
(e.g. release.yml), the sync workflow now auto-removes it instead
of creating an unresolvable PR. Remaining real conflicts still
trigger a manual-resolution PR.

Files auto-resolved: release.yml, automation.yml, cla.yml, coverage.yml,
generate-skills.yml, publish-skills.yml, release-changesets.yml,
stale.yml, dist-workspace.toml.
GitHub repo renamed to shigechika/gws-mcp to reflect its actual
purpose (MCP-only use). Binary name and OS keyring service name
unchanged.
…oogleworkspace#170, googleworkspace#260

Address bug reports and feature requests that targeted upstream's MCP
server and were auto-closed when MCP was removed:

- googleworkspace#162: use configured service alias (not Discovery doc name) as the
  tool-name prefix in walk_resources, so tools/list and tools/call
  share one namespace for aliased services like `events`.
- googleworkspace#170: replace split('_') tool-name parsing with resolve_tool_path,
  a greedy Discovery-tree resolver that handles resource names
  containing underscores (e.g. admin_role_assignments_list) and
  arbitrarily nested resources.
- googleworkspace#260: attach readOnlyHint/destructiveHint/idempotentHint annotations
  derived from HTTP method to every generated tool and to the
  gmail_send / gmail_reply helper tools.

Add unit tests for http_method_annotations and resolve_tool_path
covering simple, multi-word, nested, and not-found cases. Document
the ported issues (plus previously-addressed googleworkspace#212 and googleworkspace#251) in
FORK.md and FORK.ja.md.
…e#642)

parse_message_headers used exact-case string matching, so headers like
"CC" (common from Exchange/Outlook) or "from" (lowercase from some MTAs)
would fall through and be silently dropped. Per RFC 5322 §1.2.2 header
field names are case-insensitive, and the sibling get_part_header
already uses eq_ignore_ascii_case.

Normalize the header name to lowercase before matching. Add a
regression test covering mixed-case FROM/to/CC/subject/MESSAGE-ID/
references. Document in FORK.md.
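As a rough illustration of that fix (in Python rather than the fork's Rust), normalizing the header name once before matching covers FROM, CC, and friends; the header list and field names here are hypothetical:

```python
def parse_message_headers(headers):
    """Collect wanted headers regardless of name case (RFC 5322 §1.2.2:
    header field names are case-insensitive)."""
    wanted = {"from", "to", "cc", "subject", "message-id", "references"}
    out = {}
    for name, value in headers:
        key = name.lower()  # normalize once, then match
        if key in wanted:
            out[key] = value
    return out

hdrs = [("FROM", "a@example.com"), ("CC", "b@example.com"), ("X-Junk", "x")]
print(parse_message_headers(hdrs))
```

Exact-case matching, by contrast, would have silently dropped both FROM and CC in this input.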
Name-constraint handling for URI names and wildcard certificates was
incorrectly permissive in 0.103.10 (advisory 2026-04-14). Patch-level
bump, no API changes.
…oogleworkspace#717 as addressed (#1)

These upstream issues are already resolved in this fork's code but
weren't listed in the "issues addressed" table:

- googleworkspace#573 metadataHeaders repeated expansion: Discovery parser preserves
  `repeated: true` and executor expands arrays into repeated query
  parameters. Affects Discovery-driven MCP tools too.
- googleworkspace#625 script service registry: `ServiceEntry` for `script/v1` is
  registered so `gws script ...` and MCP `script_*` tools resolve.
- googleworkspace#717 auth status stdout cleanliness: keyring backend diagnostic goes
  to stderr via `eprintln!`, so jq pipelines work.

Docs-only change; no code touched. Listing them here makes the fork's
coverage visible to visitors and lets the cross-reference from this
commit land in upstream issue timelines.
* chore(clippy): collapse nested if in script.rs match arm

Satisfies `clippy::collapsible_match` which is enforced via
`cargo clippy --workspace -- -D warnings` in the Lint job.
No behavior change: falling through the `"json"` guard lands on the
existing `_ => return Ok(None)` arm, matching the previous else branch.

* docs(script): note the fallthrough in the appsscript match guard

Add a one-line comment clarifying that other .json files intentionally
fall through to the catch-all arm. Makes the match guard easier to read
for anyone less familiar with Rust fallthrough semantics.
…oogleworkspace#562) (#2)

* fix(auth): stop auto-injecting cloud-platform scope in TUI (upstream googleworkspace#562)

The discovery-based scope picker unconditionally appended the
cloud-platform scope after the user's selection. Some Workspace
organizations block cloud-platform via admin policy, so the injection
caused `admin_policy_enforced` login failures for users who picked
narrower, permitted scopes.

Remove the auto-inject. Users who need cloud-platform (e.g. for the
modelarmor helper) can either tick it in the picker or pass
`--full` / `--scopes https://www.googleapis.com/auth/cloud-platform`.

Also drop the now-unused `PLATFORM_SCOPE` constant from setup.rs and
document the fix in the FORK.md / FORK.ja.md issue-coverage table.

* chore: add changeset for upstream googleworkspace#562 auth TUI fix

Required by the Policy Check workflow whenever .rs or Cargo.* files change.

* test(auth): invariant tests for scope preset cloud-platform handling

Guards against regression of upstream googleworkspace#562 at the scope-list layer:

- `default_scopes_does_not_include_cloud_platform` ensures the default
  login path never silently requests cloud-platform
- `full_scopes_includes_cloud_platform` confirms the opt-in path is
  preserved so modelarmor users have a stable entry point

Also note the behavior change for modelarmor users in the changeset.
…eworkspace#644) (#4)

* fix(gmail): use OIDC userinfo for display name lookup (upstream googleworkspace#644)

`gmail +send` printed "grant the profile scope" and sent messages with
a null From display name even when `userinfo.profile` was granted. The
root cause was the People API call (`/people/me?personFields=names`)
returning 403 for some personal Gmail accounts that have the scope,
which the helper treated as "scope missing".

Switch to the OIDC userinfo endpoint
(`https://openidconnect.googleapis.com/v1/userinfo`), which accepts the
same `userinfo.profile` scope and responds uniformly across Workspace
and personal accounts. The response is a flat object with a top-level
`name` string, so `parse_profile_display_name` is simplified accordingly.

Also reword the 401/403 fallback: it used to diagnose the failure as a
missing scope, which was misleading when the scope was already granted.
The new message points at `gws auth status` / `gws auth login` and
frames the refresh path without asserting what went wrong.

Tests updated to use the OIDC userinfo response shape.

* style: cargo fmt --all on gmail helper

CI uses `cargo fmt --all --check`; local `cargo fmt` missed two spots
in the resolve_sender match guard and the fetch_profile_display_name
json parse chain. No behavior change.
- Add .github/workflows/release.yml: builds macOS/Linux binaries on
  fork/v* tag push and creates a GitHub Release. Supports
  workflow_dispatch for manually triggering a release from an existing
  tag.
- Auto-updates shigechika/homebrew-tap Formula/gws-mcp.rb with correct
  SHA256 values after each release (requires HOMEBREW_TAP_TOKEN secret).
- Update FORK.md / FORK.ja.md: add Homebrew as the recommended install
  method (no Rust toolchain required).
shigechika and others added 16 commits April 28, 2026 17:23
Rename release.yml → release-fork.yml so GitHub assigns a new workflow
ID with fresh trigger metadata (the old ID was frozen from the upstream
file that had no workflow_dispatch or fork/v* tag triggers).

Also fix two bugs in the same pass:
- Use GH_PAT instead of HOMEBREW_TAP_TOKEN (actual secret name)
- Use github.event.inputs.tag_name instead of inputs.tag_name
The --notes inline string had unindented lines (col 0) which caused
YAML to terminate the run: | block scalar early, preventing GitHub
from parsing the workflow name and triggers.

Switch to --notes-file with a heredoc so all block scalar lines stay
at the 10-space indentation level required by YAML.
GitHub Actions does not allow secrets context in step if: conditions.
Use a shell guard ([[ -z $GH_TOKEN ]]) instead.
GH_PAT is for auto-tag push only; HOMEBREW_TAP_TOKEN is the dedicated
secret with write access to shigechika/homebrew-tap.
* fix(auth): map meet service to meetings scope prefix

gws auth login -s meet silently requested zero Meet scopes because
the Meet API exposes scopes under the meetings.* prefix, not meet.*.
Add the mapping so the existing dynamic-augmentation path picks up
all Meet scopes from the Discovery document.

Mirrors upstream PR googleworkspace#754.

* chore: add changeset for meet scope fix
* fix(gmail): accept unpadded base64url from Gmail API (upstream googleworkspace#774)

Gmail API returns base64url data without `=` padding in attachment
and message body responses. Switch decoders to URL_SAFE_NO_PAD with
`.trim_end_matches('=')` so both padded and unpadded input are accepted.

* fix(gmail): use Indifferent padding decoder for base64url (upstream googleworkspace#774)

Replace URL_SAFE_NO_PAD + trim with a URL_SAFE_LENIENT const engine using
DecodePaddingMode::Indifferent, which accepts both padded and unpadded input
without preprocessing. Add changeset file for release tracking.

* fix(gmail): simplify URL_SAFE_LENIENT — drop encode_padding, trim comment to one line
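The padding-indifferent behavior can be sketched in Python, where urlsafe_b64decode requires padding and the input must be re-padded by hand (illustrative only; the fork uses the base64 crate's DecodePaddingMode::Indifferent):

```python
import base64

def b64url_decode_indifferent(s: str) -> bytes:
    """Accept both padded and unpadded base64url input."""
    s = s.rstrip("=")            # drop any existing padding
    s += "=" * (-len(s) % 4)     # re-pad to a multiple of 4
    return base64.urlsafe_b64decode(s)

print(b64url_decode_indifferent("aGVsbG8"))   # unpadded, as Gmail returns it
print(b64url_decode_indifferent("aGVsbG8="))  # padded
```

Both calls decode to the same bytes, which is exactly the invariant the Indifferent mode provides without any preprocessing.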
* ci(release): add Windows zip, .deb, and .rpm to release artifacts

- Add x86_64-pc-windows-msvc to release build matrix; package as .zip
- Generate .deb (cargo-deb) and .rpm (cargo-generate-rpm) for Linux targets
- Add [package.metadata.deb] and [package.metadata.generate-rpm] to Cargo.toml
- Update release notes to document new install methods

* ci(release): fix generate-rpm package name, drop --locked, fix release note placeholders

- cargo generate-rpm: -p crates/google-workspace-cli → -p google-workspace-cli (package name, not path)
- Remove --locked from cargo install to avoid Cargo.lock absence errors
- Release notes: VERSION → <VERSION> to clarify placeholders

* fix(release): pin cargo-deb/generate-rpm versions and add RPM ca-certificates dependency

- Pin cargo-deb to 3.7.0 and cargo-generate-rpm to 0.21.0 for reproducible builds
- Add requires = { ca-certificates = "*" } to [package.metadata.generate-rpm]
  to match the `depends = "ca-certificates"` already present in .deb metadata
* fix(release): use --manifest-path for cargo generate-rpm in workspace

cargo generate-rpm -p resolves the path as package-name/Cargo.toml
which fails in a workspace where the crate lives under crates/.
Switch to --manifest-path for explicit resolution.

* ci(release): bump actions to Node.js 24-compatible versions

- actions/checkout: v4 → v6.0.2
- mozilla-actions/sccache-action: v0.0.7 → v0.0.10
- Swatinem/rust-cache: v2 → v2.9.1
- actions/upload-artifact: v4 → v7.0.1
- actions/download-artifact: v4 → v8.0.1
…#11)

* fix(test): prevent race in encrypted-credentials tests; docs: add deb/rpm/Windows install

- test_load_credentials_encrypted_file: replace raw set_var with EnvVarGuard so
  GOOGLE_WORKSPACE_CLI_CONFIG_DIR is restored before the tempdir is deleted
- test_load_credentials_encrypted_takes_priority_over_default: add #[serial] and
  per-test config-dir isolation to eliminate the key-file creation race
- FORK.md: add Debian .deb, RHEL .rpm, Windows .zip, and direct-download sections
  to the Installation chapter

* chore: add changeset for test race fix and install docs

* docs: fix deb filename placeholder to include revision suffix (-1)
…pm (#12)

--manifest-path is unsupported in cargo-generate-rpm 0.21.0.
-p treats its argument as a directory path, so -p crates/google-workspace-cli
resolves correctly to crates/google-workspace-cli/Cargo.toml.
#13)

* ci(release): use friendly asset names instead of raw Rust target triples

Add asset_name to build matrix:
  aarch64-apple-darwin    -> macos-arm64
  x86_64-apple-darwin     -> macos-amd64
  x86_64-unknown-linux-gnu -> linux-amd64
  aarch64-unknown-linux-gnu -> linux-arm64
  x86_64-pc-windows-msvc  -> windows-amd64

Update Homebrew formula URLs and FORK.md install commands to match.

* ci(release): include version in asset filenames

Format: gws-mcp-<VERSION>-<platform>.ext
e.g. gws-mcp-0.22.5-mcp.9-linux-amd64.tar.gz

Version is derived from the tag at build time via GITHUB_ENV.
Update Homebrew formula URLs and FORK.md accordingly.
* feat(mcp): add Streamable HTTP transport via rmcp (Phase 1, no auth)

* chore: update Cargo.lock with axum and rmcp dependencies

* fix(mcp): bind to 127.0.0.1 by default; add --bind flag and tool deserialization warning
Implements MCP Authorization spec 2025-11-25 on the HTTP transport.
Enable with `--auth` flag alongside `--transport http`.

- RFC 9728 /.well-known/oauth-protected-resource
- RFC 8414 /.well-known/oauth-authorization-server
- DCR stub (/oauth/register)
- PKCE S256 authorization flow via Google OAuth2 (/oauth/authorize, /oauth/callback)
- Token endpoint with PKCE S256 verification (/oauth/token)
- Bearer auth middleware: all /mcp requests require Authorization: Bearer
- Session TTL: 8 hours; pending auth: 10 minutes
- --auth flag preserved as opt-in; Phase 1 (no auth) unchanged
- RFC 7636 §4.6 PKCE unit tests
@changeset-bot

changeset-bot Bot commented May 9, 2026

🦋 Changeset detected

Latest commit: b37314b

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:

| Name | Type |
| --- | --- |
| @googleworkspace/cli | Minor |


@shigechika
Author

Opened by mistake — this belongs in the fork. Closing immediately.

@googleworkspace-bot googleworkspace-bot added area: auth area: mcp area: distribution area: docs area: core Core CLI parsing, commands, error handling, utilities labels May 9, 2026
@shigechika shigechika closed this May 9, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the MCP server capabilities by introducing a secure HTTP transport layer with OAuth2 PKCE authentication. It also expands the utility of the MCP server through new helper tools that simplify common Gmail operations, while resolving several upstream-reported bugs related to authentication, scope management, and API data parsing.

Highlights

  • MCP Authorization Server: Implemented an OAuth2 PKCE Authorization Server for the HTTP transport, supporting RFC 9728 and RFC 8414 metadata, and PKCE S256 flows.
  • HTTP Transport Auth: Added bearer token authentication for all /mcp requests when using the HTTP transport, with session management and 401 challenge responses.
  • Helper Tools: Introduced convenience helper tools like gmail_send and gmail_reply to automate complex API interactions like RFC 2822 formatting.
  • Bug Fixes & Improvements: Addressed multiple issues including Gmail display-name lookup, unpadded base64url decoding, and TUI scope-picker policy conflicts.
Ignored Files
  • Ignored by pattern: .github/workflows/** (12)
    • .github/workflows/auto-tag.yml
    • .github/workflows/automation.yml
    • .github/workflows/ci.yml
    • .github/workflows/cla.yml
    • .github/workflows/coverage.yml
    • .github/workflows/generate-skills.yml
    • .github/workflows/publish-skills.yml
    • .github/workflows/release-changesets.yml
    • .github/workflows/release-fork.yml
    • .github/workflows/release.yml
    • .github/workflows/stale.yml
    • .github/workflows/sync-upstream.yml

@google-cla

google-cla Bot commented May 9, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

Contributor

@gemini-code-assist (Bot) left a comment


Code Review

This pull request adds Model Context Protocol (MCP) server support to the Google Workspace CLI, implementing both stdio and HTTP transports along with an OAuth2 PKCE authorization server. It introduces AI-friendly helper tools for Gmail and resolves several upstream issues regarding scope mapping and base64 decoding. The review feedback highlights critical security and stability concerns, specifically an open redirect vulnerability in the authorization endpoint, a potential memory leak from unpruned sessions, and the need to sanitize terminal output to prevent escape sequence injection.

Comment on lines +343 to +346
async fn oauth_authorize(
    State(state): State<AppState>,
    Query(p): Query<AuthorizeParams>,
) -> Result<Redirect, (StatusCode, String)> {

security-high high

The oauth_authorize endpoint does not validate the redirect_uri parameter against an allowlist or registered client metadata. This creates an Open Redirect vulnerability where an attacker can trick a user into logging in and then redirect them to a malicious site, potentially stealing the authorization code. Per the MCP Authorization spec (2025-11-25), the authorization server MUST validate the redirection URI. For a local MCP server, you should at least restrict redirects to loopback addresses.

async fn oauth_authorize(
    State(state): State<AppState>,
    Query(p): Query<AuthorizeParams>,
) -> Result<Redirect, (StatusCode, String)> {
    // Validate redirect_uri to prevent Open Redirect vulnerabilities.
    // For a local MCP server, we restrict redirects to loopback addresses.
    if !p.redirect_uri.starts_with("http://localhost") 
        && !p.redirect_uri.starts_with("http://127.0.0.1") 
    {
        return Err((
            StatusCode::BAD_REQUEST,
            "invalid_redirect_uri: only loopback addresses are allowed".to_string(),
        ));
    }

Comment on lines +255 to +268
let sessions = state.auth_store.sessions.lock().await;
match sessions.get(&token) {
Some(s) if !is_expired(s.created_at, SESSION_TTL) => {}
_ => {
return (
StatusCode::UNAUTHORIZED,
[(
"WWW-Authenticate",
"Bearer realm=\"gws-mcp\", error=\"invalid_token\", resource_metadata=\"/.well-known/oauth-protected-resource\"",
)],
)
.into_response();
}
}

high

The sessions map in AuthStore is never pruned, leading to a memory leak as new sessions are created over time. While bearer_auth_middleware checks for expiration, it does not remove expired entries. You should remove the session from the map if it is found to be expired to prevent memory exhaustion.

    let mut sessions = state.auth_store.sessions.lock().await;
    let is_valid = match sessions.get(&token) {
        Some(s) => !is_expired(s.created_at, SESSION_TTL),
        None => false,
    };
    if !is_valid {
        sessions.remove(&token);
        return (
            StatusCode::UNAUTHORIZED,
            [(
                "WWW-Authenticate",
                "Bearer realm=\"gws-mcp\", error=\"invalid_token\", resource_metadata=\"/.well-known/oauth-protected-resource\"",
            )],
        )
            .into_response();
    }
References
  1. Avoid introducing changes that are outside the primary goal of a pull request to prevent scope creep.
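The prune-on-lookup idea can be sketched outside Rust; this hypothetical Python store removes an entry the moment an expired token is presented, so the map stays bounded by live sessions:

```python
import time

SESSION_TTL = 8 * 3600  # seconds, matching the PR's 8-hour session TTL

class SessionStore:
    """In-memory bearer-token sessions; expired entries are removed on
    lookup so the map cannot grow without bound."""
    def __init__(self):
        self._sessions = {}

    def insert(self, token, created_at=None):
        self._sessions[token] = created_at if created_at is not None else time.time()

    def is_valid(self, token):
        created = self._sessions.get(token)
        if created is None:
            return False
        if time.time() - created > SESSION_TTL:
            del self._sessions[token]  # prune on expiry, not just reject
            return False
        return True

store = SessionStore()
store.insert("fresh")
store.insert("stale", created_at=time.time() - SESSION_TTL - 1)
print(store.is_valid("fresh"), store.is_valid("stale"), "stale" in store._sessions)
# True False False
```

A periodic sweep would additionally reclaim tokens that are never presented again; prune-on-lookup alone only bounds growth from re-presented tokens.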

Comment on lines +1107 to +1109
eprintln!(
"[gws mcp] Warning: Authentication failed, proceeding without credentials: {e}"
);

security-high high

The error message e is printed directly to the terminal without sanitization. Since GwsError can contain strings returned from external Google APIs, this could allow for escape sequence injection. Please use crate::output::sanitize_for_terminal before printing.

            eprintln!(
                "[gws mcp] Warning: Authentication failed, proceeding without credentials: {}",
                crate::output::sanitize_for_terminal(&e.to_string())
            );
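A minimal sketch of what a sanitizer like this typically does (hypothetical Python; the actual crate::output::sanitize_for_terminal may differ) is to drop control characters so ANSI escape sequences cannot reach the terminal:

```python
def sanitize_for_terminal(s: str) -> str:
    """Keep tabs and newlines; drop other control characters (including
    ESC, 0x1b) so untrusted API strings cannot inject terminal escapes."""
    return "".join(ch for ch in s if ch in "\t\n" or ch.isprintable())

evil = "ok\x1b[2Jrm -rf"   # contains an ANSI clear-screen sequence
print(repr(sanitize_for_terminal(evil)))
```

With the ESC byte removed, the remaining "[2J" is printed as harmless literal text.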

serde_json::from_value(v.clone())
    .map_err(|e| {
        let name = v.get("name").and_then(|n| n.as_str()).unwrap_or("?");
        eprintln!("[gws mcp] Warning: skipping tool '{name}': {e}");

security-high high

The tool name and error message are printed to the terminal without sanitization. Since these can contain data from external Discovery documents, they should be sanitized to prevent escape sequence injection.

                            eprintln!(
                                "[gws mcp] Warning: skipping tool '{}': {}",
                                crate::output::sanitize_for_terminal(name),
                                crate::output::sanitize_for_terminal(&e.to_string())
                            );

