What is DeepEval?
Use when discussing or working with DeepEval (the Python AI evaluation framework). Source: sammcj/agentic-coding.
Quickly install the DeepEval AI skill into your development environment via the command line.
DeepEval is a pytest-based framework for testing LLM applications. It provides 50+ evaluation metrics covering RAG pipelines, conversational AI, agents, safety, and custom criteria. DeepEval integrates into development workflows through pytest, supports multiple LLM providers, and includes component-level tracing with the @observe decorator.
Repository: https://github.com/confident-ai/deepeval
Documentation: https://deepeval.com
See references/custommetrics.md for a complete guide to creating custom metrics by subclassing BaseMetric, including deterministic scorers (ROUGE, BLEU, BERTScore).
npx skills add https://github.com/sammcj/agentic-coding --skill deepeval
1. Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.).
2. Copy and run this command: npx skills add https://github.com/sammcj/agentic-coding --skill deepeval
3. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.