# Get GPT-5.4 second opinions directly from your Claude Code terminal

Cross-check your Claude Code work with OpenAI's GPT-5.4 at maximum reasoning depth. Three modes for different workflows, from quick inline questions to full autonomous code review and implementation.
Install | How It Works | Modes | Prompting Guide | Contributing
You're deep in a Claude Code session. Something feels off — maybe the architecture, maybe a subtle bug. You want a second pair of eyes, but from a model that thinks differently.
This skill lets you ask GPT-5.4 without leaving your terminal. Claude curates the context, Codex analyzes it with `xhigh` reasoning effort, and the result comes back as a file you can read and act on. No copy-pasting into ChatGPT. No tab-switching.
## Install

```sh
# Copy to your Claude Code skills directory
mkdir -p ~/.claude/skills/codex
curl -sL https://raw.githubusercontent.com/199-biotechnologies/codex-skill/main/SKILL.md \
  -o ~/.claude/skills/codex/SKILL.md
```

Requirements:

- Codex CLI installed (`npm install -g @openai/codex`)
- ChatGPT subscription (Plus, Pro, Business, or Enterprise), logged in via `codex login`
## How It Works

```
┌─────────────┐   prompt + context   ┌─────────────┐
│ Claude Code │ ───────────────────► │  Codex CLI  │
│  (you work  │                      │   GPT-5.4   │
│    here)    │ ◄─────────────────── │    xhigh    │
└─────────────┘   output .md file    └─────────────┘
```
Claude writes a structured prompt, runs `codex exec` with your subscription auth, and reads the output file back. No API keys needed; it uses your ChatGPT login.
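A minimal sketch of that flow, as a hypothetical wrapper that assembles the invocation itself. Only `codex exec` and the output location come from this README; the function name, argument order, and timestamped filename are illustrative assumptions:

```python
import time
from pathlib import Path


def build_codex_invocation(prompt: str) -> tuple[list[str], Path]:
    """Assemble a `codex exec` call plus the file the result is read back from."""
    out_dir = Path("~/.claude/subagent-results").expanduser()
    # Timestamped name is an assumption; the skill uses codex-output-*.md
    out_file = out_dir / f"codex-output-{int(time.time())}.md"
    # Runs under your ChatGPT login, so no API key appears anywhere here.
    cmd = ["codex", "exec", prompt]
    return cmd, out_file


cmd, out_file = build_codex_invocation("Is this retry loop safe under partial failures?")
```

The command list would then be handed to a process runner, with its output captured into `out_file` for Claude to read back.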
## Modes

| Mode | What Happens | When to Use |
|---|---|---|
| Quick | Inline question, direct answer | "What does GPT think about X?" |
| Analysis | Claude curates context, Codex analyzes | Architecture review, verification, second opinion |
| Delegated | Codex explores the codebase autonomously | Code review, implementation, debugging, testing |
"Ask codex if this regex handles edge cases"
Fast, inline. Codex gets the question, no filesystem access.
"Get codex's opinion on our auth architecture"
Claude summarizes what it knows into a structured prompt. Codex analyzes without needing to read files — Claude is the context engineer.
"Have codex review this repo for security issues"
Codex gets full filesystem access and explores on its own. Use --full-auto for autonomous work.
## Prompting Guide

Codex is adversarial by nature: it challenges assumptions and proposes alternatives. This is a feature, not a bug, but it means you need to frame prompts with clear scope and direction.
The skill includes a built-in prompting guide that teaches Claude how to:
- State the goal and direction first (so criticism is constructive)
- Scope the review explicitly (focus areas + what to leave alone)
- Pin decisions already made (don't re-litigate framework choices)
- Specify what kind of feedback you want (bugs? validation? improvements?)
This turns Codex from "confusing contrarian" into "focused expert reviewer."
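The four guidelines above can be sketched as a prompt template. This is a hypothetical helper for illustration, not part of the skill itself:

```python
def structured_prompt(goal: str, scope: list[str], pinned: list[str], feedback: str) -> str:
    """Build a Codex prompt that states goal, scope, pinned decisions, and desired feedback."""
    lines = [
        f"Goal: {goal}",                                              # direction first
        "In scope: " + "; ".join(scope),                              # explicit focus areas
        "Already decided (do not re-litigate): " + "; ".join(pinned), # pinned choices
        f"Feedback wanted: {feedback}",                               # kind of feedback
    ]
    return "\n".join(lines)


prompt = structured_prompt(
    goal="Harden the session-token refresh path",
    scope=["token rotation", "error handling on refresh failure"],
    pinned=["we stay on Express", "JWTs, not opaque tokens"],
    feedback="bugs and missed edge cases, not style",
)
```

Stating the goal before the critique request is what keeps an adversarial reviewer constructive: it gives the pushback a target.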
The skill uses `xhigh` reasoning effort by default, the maximum available for GPT-5.4. It also uses Fast Mode (1.5x faster output via ChatGPT OAuth).
All output is saved to `~/.claude/subagent-results/codex-output-*.md` for persistence across context compaction.
Say any of these in Claude Code to activate the skill:
- "use codex", "ask codex", "codex check"
- "double check with codex", "verify with codex"
- "codex review", "codex implement"
- "what does GPT think", "OpenAI's take"
- "cross-check with codex"
## Contributing

PRs welcome. If you've found a better prompting pattern or a flag combination that improves output, open a PR.
## License

MIT