
skill-evaluator

Evaluate Claude Code skill quality across 6 weighted dimensions: frontmatter quality, trigger coverage, structural completeness, content depth, consistency and integrity, and CONTRIBUTING.md compliance. Produces scored audit reports with severity-classified findings and actionable recommendations. Two modes: (1) Quick Audit — single skill, full per-dimension report with weighted scoring; (2) Full Audit — all skills in repo, comparative ranking table plus per-skill summaries. Triggers on: "evaluate skill", "audit skill quality", "score skill", "skill review", "check skill completeness", "rate this skill", "skill quality check", "grade skill", "assess skill", "skill audit", "how good is this skill", or when a user asks for feedback on a SKILL.md file. Use this skill when reviewing skills before deployment, comparing skill quality across a repo, or diagnosing why a skill fails to activate on relevant queries.

16 Installs · 1 Trend · @mathews-tom

Installation

$ npx skills add https://github.com/mathews-tom/praxis-skills --skill skill-evaluator

How to Install skill-evaluator

Quickly install the skill-evaluator AI skill in your development environment from the command line.

  1. Open a terminal: Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
  2. Run the installation command: Copy and run: npx skills add https://github.com/mathews-tom/praxis-skills --skill skill-evaluator
  3. Verify the installation: Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Source: mathews-tom/praxis-skills.

SKILL.md


Skills that do not activate on relevant queries waste the entire investment in writing them. A skill can have deep, well-structured content and still deliver zero value if its frontmatter description lacks the trigger phrases users actually type. Quality evaluation catches trigger gaps, missing sections, and shallow content before deployment, turning a skill from a static document into a reliable tool.

| Reference | Contents |
| --- | --- |
| references/evaluation-rubric.md | Detailed 1-5 scoring criteria per dimension, weight justifications, worked examples for calibration |
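The weighted scoring described above can be sketched in a few lines. This is a minimal illustration only: the dimension weights below are assumptions for the example, not the skill's actual rubric (which lives in references/evaluation-rubric.md).

```python
# Illustrative sketch of combining per-dimension 1-5 scores into one
# weighted score. The weights here are assumed, not the real rubric.
WEIGHTS = {
    "frontmatter_quality": 0.20,
    "trigger_coverage": 0.25,
    "structural_completeness": 0.15,
    "content_depth": 0.20,
    "consistency_integrity": 0.10,
    "contributing_compliance": 0.10,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-dimension 1-5 scores into a single weighted score."""
    assert set(scores) == set(WEIGHTS), "every dimension must be scored"
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

# Example: a skill strong on structure and content but weak on triggers.
scores = {
    "frontmatter_quality": 4,
    "trigger_coverage": 2,
    "structural_completeness": 5,
    "content_depth": 4,
    "consistency_integrity": 3,
    "contributing_compliance": 3,
}
print(round(weighted_score(scores), 2))  # → 3.45
```

Note how a low trigger-coverage score drags the total down despite strong content, which matches the point above: a skill that never activates delivers no value.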


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command: npx skills add https://github.com/mathews-tom/praxis-skills --skill skill-evaluator
Category: Dev Tools
Verified
First seen: 2026-02-25
Updated: 2026-03-10

Browse more skills from mathews-tom/praxis-skills

Quick answers

What is skill-evaluator?

skill-evaluator evaluates Claude Code skill quality across six weighted dimensions: frontmatter quality, trigger coverage, structural completeness, content depth, consistency and integrity, and CONTRIBUTING.md compliance. It produces scored audit reports with severity-classified findings and actionable recommendations, in two modes: Quick Audit (a single skill, with a full per-dimension report and weighted scoring) and Full Audit (all skills in a repo, with a comparative ranking table plus per-skill summaries). Source: mathews-tom/praxis-skills.

How do I install skill-evaluator?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.). Copy and run this command: npx skills add https://github.com/mathews-tom/praxis-skills --skill skill-evaluator. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/mathews-tom/praxis-skills