
ai-stopping-hallucinations

Stop your AI from making things up. Use when your AI hallucinates, fabricates facts, isn't grounded in real data, doesn't cite sources, makes unsupported claims, or you need to verify AI responses against source material. Covers citation enforcement, faithfulness verification, grounding via retrieval, and confidence thresholds.

7 Installs · 0 Trend · @lebsral

Installation

$ npx skills add https://github.com/lebsral/dspy-programming-not-prompting-lms-skills --skill ai-stopping-hallucinations

How to Install ai-stopping-hallucinations

Quickly install the ai-stopping-hallucinations AI skill in your development environment via the command line

  1. Open Terminal: Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.)
  2. Run Installation Command: Copy and run this command: npx skills add https://github.com/lebsral/dspy-programming-not-prompting-lms-skills --skill ai-stopping-hallucinations
  3. Verify Installation: Once installed, the skill will be automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw

Source: lebsral/dspy-programming-not-prompting-lms-skills.

SKILL.md


Guide the user through making their AI factually grounded. The core principle: never trust a bare LM output — always verify against sources.
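The "verify against sources" step can be made mechanical rather than left to the model. A minimal citation-enforcement sketch (function and marker format are illustrative, not part of the skill itself): every sentence in the answer must carry a `[n]` marker that resolves to a known source, and any sentence that doesn't is flagged for rejection or retry.

```python
import re

def enforce_citations(answer: str, sources: dict[int, str]) -> list[str]:
    """Return a list of problems; an empty list means every sentence cites a known source."""
    problems = []
    # Split the answer into sentences on terminal punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    for sentence in sentences:
        cited = [int(n) for n in re.findall(r"\[(\d+)\]", sentence)]
        if not cited:
            problems.append(f"Uncited claim: {sentence!r}")
        for n in cited:
            if n not in sources:
                problems.append(f"Unknown source [{n}] in: {sentence!r}")
    return problems
```

A caller would loop: generate, run `enforce_citations`, and re-prompt (or refuse) while the problem list is non-empty, rather than trusting the first output.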

LMs generate plausible-sounding text, not verified facts. Hallucination happens when an output is never checked against the underlying source material.

The fix isn't better prompting — it's programmatic constraints that force grounding.

Stop your AI from making things up. Use when your AI hallucinates, fabricates facts, isn't grounded in real data, doesn't cite sources, makes unsupported claims, or you need to verify AI responses against source material. Covers citation enforcement, faithfulness verification, grounding via retrieval, and confidence thresholds. Source: lebsral/dspy-programming-not-prompting-lms-skills.
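The confidence thresholds the description mentions can be sketched as an abstain gate: given an answer and a confidence estimate (e.g. derived from token logprobs or a self-evaluation pass), return the answer only when confidence clears the bar. The function name, threshold value, and abstention message below are hypothetical.

```python
def answer_or_abstain(answer: str, confidence: float, threshold: float = 0.75) -> str:
    """Return the answer only when confidence clears the threshold; otherwise abstain."""
    if confidence < threshold:
        return "I don't know: confidence is too low to answer reliably."
    return answer
```

An explicit "I don't know" path is what turns a silent hallucination into a visible, handleable outcome.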

Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/lebsral/dspy-programming-not-prompting-lms-skills --skill ai-stopping-hallucinations
Category
Data Analysis
Verified
First Seen
2026-02-24
Updated
2026-03-11

Browse more skills from lebsral/dspy-programming-not-prompting-lms-skills

Quick answers

What is ai-stopping-hallucinations?

Stop your AI from making things up. Use when your AI hallucinates, fabricates facts, isn't grounded in real data, doesn't cite sources, makes unsupported claims, or you need to verify AI responses against source material. Covers citation enforcement, faithfulness verification, grounding via retrieval, and confidence thresholds. Source: lebsral/dspy-programming-not-prompting-lms-skills.

How do I install ai-stopping-hallucinations?

Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.). Copy and run this command: npx skills add https://github.com/lebsral/dspy-programming-not-prompting-lms-skills --skill ai-stopping-hallucinations. Once installed, the skill will be automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/lebsral/dspy-programming-not-prompting-lms-skills

Details

Category
Data Analysis
Source
skills.sh
First Seen
2026-02-24