QA Skill for Codex and Claude

Agent workflow

The /qa skill gives Codex and Claude a natural-language front door into impact-gate-qa. Instead of memorizing flags, you can tell the agent what to test, where the app is running, and whether you want fixes or a pure report.

Full story

If you want the full product story from diff to browser exploration to generated and healed specs, start with the Autonomous Browser QA guide.

Mental model

The skill wraps the autonomous QA agent, not a separate implementation

When you invoke /qa, the skill determines the mode, finds the app URL, chooses fix settings, and then runs impact-gate-qa underneath. The final report still comes from the same browser-driven QA pipeline.

Typical mapping

Natural language in, QA command out

/qa test this app at http://localhost:3000 → impact-gate-qa pr --base-url http://localhost:3000
/qa hunt "checkout flow" on https://staging.example.com → impact-gate-qa hunt "checkout flow" --base-url https://staging.example.com

Install The Skill

Codex

Install the repo skill into your Codex skills directory

mkdir -p "$CODEX_HOME/skills/qa"
cp skills/qa/SKILL.md "$CODEX_HOME/skills/qa/SKILL.md"

If you keep a shared agent workspace, symlinking the file works too.

Claude

Install the same skill into your Claude repo or user skill directory

mkdir -p .claude/skills/qa
cp skills/qa/SKILL.md .claude/skills/qa/SKILL.md

The same SKILL.md can be reused; the wrapper behavior stays consistent across both agents.

What You Need First

Requirements

A reachable app plus the browser QA tooling

  • a running local, staging, or preview URL
  • agent-browser installed and on PATH
  • impact-gate-qa available through npx or a global install
  • ANTHROPIC_API_KEY for the exploratory browser loop and fix workflow
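
As a quick sanity check before the first run, you can verify these prerequisites with a short script. This is a hypothetical preflight sketch, not part of the tool; it only detects a globally installed impact-gate-qa, so adjust it if you rely on npx.

```python
# Hypothetical preflight: report any missing prerequisite before the first /qa run.
import os
import shutil

def missing_prerequisites() -> list[str]:
    missing = []
    # Both CLIs must be resolvable on PATH.
    for binary in ("agent-browser", "impact-gate-qa"):
        if shutil.which(binary) is None:
            missing.append(binary)
    # The exploratory browser loop and fix workflow need this key.
    if not os.environ.get("ANTHROPIC_API_KEY"):
        missing.append("ANTHROPIC_API_KEY")
    return missing

if __name__ == "__main__":
    gaps = missing_prerequisites()
    print("missing: " + ", ".join(gaps) if gaps else "all prerequisites found")
```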

Use the QA skill in Codex

Codex examples

Ask for the outcome you want

/qa test this app at http://localhost:3000
/qa hunt "account settings" on http://127.0.0.1:5173
/qa run a release regression on https://staging.example.com
/qa smoke-test this branch but do not apply fixes

Use the skill when you want Codex to:

  • choose between pr, hunt, release, and fix modes for you
  • translate natural-language scope into an impact-gate-qa invocation
  • run the QA agent, then summarize the resulting health score, verdict, and top findings

Use the QA skill in Claude

Claude examples

The same prompts work well in Claude flows

/qa test the current branch against http://localhost:3000
/qa hunt "checkout flow" on https://staging.example.com
/qa run a full regression on the staging URL and compare to baseline
/qa verify the healed tests only

The useful mental model is the same: the skill is the ergonomic entry point, and impact-gate-qa is the engine underneath it.

Mode Mapping

You ask for                               The skill usually chooses
test the current branch                   pr
hunt a specific feature or flow           hunt
run a full regression or release check    release
verify healed tests                       fix

If the request includes a specific area like “account settings” or “checkout flow,” the skill typically maps that to hunt. If the request says “release,” “regression,” or “full test,” it typically maps that to release.
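
As a rough illustration, the keyword-to-mode heuristic the table describes could be sketched like this. This is not the skill's actual implementation; the command shapes simply follow the mappings shown above.

```python
# Sketch of the translation the skill performs: pick a mode from keywords
# in the request, then assemble the matching impact-gate-qa invocation.
import re

def build_command(request: str) -> list[str]:
    url_match = re.search(r"https?://\S+", request)
    text = request.lower()
    if "hunt" in text:
        # hunt takes a quoted focus area, e.g. /qa hunt "checkout flow"
        focus = re.search(r'"([^"]+)"', request)
        cmd = ["impact-gate-qa", "hunt", focus.group(1) if focus else ""]
    elif any(word in text for word in ("release", "regression", "full test")):
        cmd = ["impact-gate-qa", "release"]
    elif "healed" in text:
        cmd = ["impact-gate-qa", "fix"]
    else:
        # Default: test the current branch.
        cmd = ["impact-gate-qa", "pr"]
    if url_match:
        cmd += ["--base-url", url_match.group(0)]
    return cmd
```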

A Good First Run

Safest first run

Start with a report-only pass before you let the loop fix anything

/qa test this app at http://localhost:3000 but do not apply fixes → impact-gate-qa pr --base-url http://localhost:3000 --no-fix

That gives you:

  • a health score
  • a GO / NO-GO / CONDITIONAL verdict
  • categorized findings
  • screenshots and before/after evidence
  • a structured QA report in .e2e-ai-agents/

Artifacts the skill produces

Summary

Human-readable summary

.e2e-ai-agents/qa-summary.md

Human-readable verdict, health score, and top issues.

Structured report

Machine-readable report

.e2e-ai-agents/qa-report.json

Machine-readable findings, categories, regression deltas, and fix results.
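
Because the report is machine-readable, you can gate a pipeline on it. The sketch below is illustrative only: the field names (verdict, health_score, findings) are assumptions about the schema, not documented guarantees, so check them against an actual qa-report.json before relying on this.

```python
# Hypothetical consumer of .e2e-ai-agents/qa-report.json.
# Field names here are assumed for illustration, not a documented schema.
import json

sample = json.loads("""
{"verdict": "CONDITIONAL", "health_score": 78,
 "findings": [{"category": "visual", "severity": "medium"}]}
""")

def gate(report: dict, min_score: int = 80) -> bool:
    """Pass only on a GO verdict with a healthy score."""
    return report["verdict"] == "GO" and report["health_score"] >= min_score

print(gate(sample))  # the sample verdict is CONDITIONAL, so this prints False
```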

When to use the skill instead of the CLI

Best fit

Use the skill when you want the agent to translate intent into the right QA run

  • you are already working inside Codex or Claude
  • you want natural-language QA requests instead of hand-building flags
  • you want the final QA report summarized back into the same agent conversation

If you already know the exact mode and flags you want, running impact-gate-qa directly is still perfectly fine. The skill exists to make the workflow easier to invoke and easier to explain to teams.