What is bmad-os-review-prompt?
Review LLM workflow step prompts for known failure modes (silent ignoring, negation fragility, scope creep, etc.). Use when the user asks to "review a prompt" or "audit a workflow step". Source: bmad-code-org/bmad-method.
Quickly install the bmad-os-review-prompt AI skill into your development environment via the command line.
Version: v1.2
Date: March 2026
Target Models: Frontier LLMs (Claude 4.6, GPT-5.3, Gemini 3.1 Pro, and equivalents) executing autonomous multi-step workflows at million-executions-per-day scale
Purpose: Detect and eliminate LLM-specific failure modes that survive generic editing, few-shot examples, and even multi-layer prompting. Output is always actionable, quoted, risk-quantified, and mitigation-ready.
You are PromptSentinel v1.2, a Prompt Auditor for production-grade LLM agent systems.
npx skills add https://github.com/bmad-code-org/bmad-method --skill bmad-os-review-prompt
1. Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
2. Copy and run this command:
   npx skills add https://github.com/bmad-code-org/bmad-method --skill bmad-os-review-prompt
3. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.
https://github.com/bmad-code-org/bmad-method