
quality-reflective-questions

Provides reflective questioning framework to challenge assumptions about work completeness, catching incomplete implementations before they're marked "done". Use before claiming features complete, before moving ADRs to completed status, during self-review, or when declaring work finished. Triggers on "is this really done", "self-review my work", "challenge my assumptions", "verify completeness", or proactively before marking tasks complete. Works with any type of implementation work. Enforces critical thinking about integration, testing, and execution proof.

4 installs · 0 trend · @dawiddutoit

Installation

$ npx skills add https://github.com/dawiddutoit/custom-claude --skill quality-reflective-questions

How to Install quality-reflective-questions

Quickly install the quality-reflective-questions AI skill into your development environment via the command line.

  1. Open Terminal: Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
  2. Run Installation Command: Copy and run this command: npx skills add https://github.com/dawiddutoit/custom-claude --skill quality-reflective-questions
  3. Verify Installation: Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Source: dawiddutoit/custom-claude.

SKILL.md


Before marking ANYTHING as "done", ask yourself these questions and provide HONEST answers:

If you cannot answer ALL FOUR with specific, concrete details, the work is NOT complete.

❌ Bad (vague): "It's integrated" → ✅ Good (specific): "Imported in builder.py line 45"
❌ Bad (vague): "It works" → ✅ Good (specific): "Logs show execution at 10:30:45"
❌ Bad (vague): "Tests pass" → ✅ Good (specific): "46 unit tests + 2 integration tests pass"
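As a minimal sketch, the evidence gate above could be expressed in code. This is illustrative only: the skill itself is a prompting framework, not a library, so the `CompletionCheck` class, its field names, the fourth (review) question, and the vague-answer heuristic below are all assumptions, not part of the skill.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "answer all four honestly" gate. The skill is a
# prompting framework, not code; names and heuristics here are assumptions.

VAGUE_ANSWERS = {"it's integrated", "it works", "tests pass", "done", "yes"}

@dataclass
class CompletionCheck:
    integration_proof: str  # e.g. "Imported in builder.py line 45"
    execution_proof: str    # e.g. "Logs show execution at 10:30:45"
    test_proof: str         # e.g. "46 unit tests + 2 integration tests pass"
    review_proof: str       # assumed fourth question, e.g. "Diff self-reviewed"

    def is_done(self) -> bool:
        """Done only if every answer is specific rather than a vague claim."""
        answers = (
            self.integration_proof,
            self.execution_proof,
            self.test_proof,
            self.review_proof,
        )
        return all(
            a.strip() and a.strip().lower() not in VAGUE_ANSWERS
            for a in answers
        )

if __name__ == "__main__":
    check = CompletionCheck(
        integration_proof="Imported in builder.py line 45",
        execution_proof="Logs show execution at 10:30:45",
        test_proof="46 unit tests + 2 integration tests pass",
        review_proof="it works",  # vague answer, so the gate fails
    )
    print("done" if check.is_done() else "NOT complete: answers must be specific")
```

Running the sketch prints "NOT complete: answers must be specific", because one of the four answers is a vague claim rather than concrete evidence.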


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command: npx skills add https://github.com/dawiddutoit/custom-claude --skill quality-reflective-questions
Category: Dev Tools
Verified
First seen: 2026-02-25
Updated: 2026-03-10


Quick answers

What is quality-reflective-questions?

quality-reflective-questions is a reflective questioning framework from dawiddutoit/custom-claude that challenges assumptions about work completeness, catching incomplete implementations before they are marked "done". It works with any type of implementation work and enforces critical thinking about integration, testing, and execution proof.

How do I install quality-reflective-questions?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.), then copy and run this command: npx skills add https://github.com/dawiddutoit/custom-claude --skill quality-reflective-questions. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/dawiddutoit/custom-claude