semantic-caching
Redis semantic caching for LLM applications. Use when implementing vector similarity caching, optimizing LLM costs through cached responses, or building multi-level cache hierarchies.
SKILL.md
| Similarity | Distance | Match quality |
|------------|----------|---------------------|
| 0.98-1.00 | 0.00-0.02 | Nearly identical |
| 0.95-0.98 | 0.02-0.05 | Very similar |
| 0.92-0.95 | 0.05-0.08 | Similar (default) |
| 0.85-0.92 | 0.08-0.15 | Moderately similar |
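For orientation, here is a minimal sketch of how these bands translate into a hit/miss decision, assuming cosine similarity (so distance = 1 - similarity) and the 0.92 default from the table; the function names are illustrative, not part of the skill's API.

```python
import numpy as np

# Illustrative only: maps the similarity bands above onto a cache-hit check.
# Assumes embeddings are compared with cosine similarity (distance = 1 - similarity).
DEFAULT_SIMILARITY_THRESHOLD = 0.92  # "Similar (default)" band from the table


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_cache_hit(query_emb: np.ndarray, cached_emb: np.ndarray,
                 threshold: float = DEFAULT_SIMILARITY_THRESHOLD) -> bool:
    """Treat a cached entry as a hit when similarity meets the threshold."""
    return cosine_similarity(query_emb, cached_emb) >= threshold
```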
| Setting | Recommendation |
|-----------|----------------------------------------|
| Threshold | Start at 0.92, tune based on hit rate |
| TTL | 24h for production |
| Embedding | text-embedding-3-small (fast) |
| L1 size | 1000-10000 entries |
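These defaults can be collected into a small configuration object. The sketch below only mirrors the recommendations above; the field names and seconds-based TTL are assumptions, not the skill's actual configuration schema.

```python
from dataclasses import dataclass


@dataclass
class SemanticCacheConfig:
    # Values mirror the recommendations above; field names are illustrative.
    similarity_threshold: float = 0.92                 # start here, tune against hit rate
    ttl_seconds: int = 24 * 60 * 60                    # 24h for production
    embedding_model: str = "text-embedding-3-small"    # fast, low-cost embeddings
    l1_max_entries: int = 10_000                       # in-process L1 size (1000-10000 entries)
```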
redis-vector-cache
Keywords: redis, vector, embedding, similarity, cache
Solves: Redis semantic caching for LLM applications. Use when implementing vector similarity caching, optimizing LLM costs through cached responses, or building multi-level cache hierarchies.
Source: yonatangross/orchestkit.
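The multi-level hierarchy mentioned above typically pairs a small in-process L1 cache (exact-match, LRU-evicted) with a Redis-backed L2 searched by vector similarity. The sketch below shows only the lookup order under those assumptions: the L2 store is an in-memory stand-in rather than a real Redis vector index, and `embed` is a placeholder for an actual embedding call, so treat it as a shape, not the skill's implementation.

```python
from collections import OrderedDict

import numpy as np


class TwoLevelSemanticCache:
    """Lookup order: exact-match L1 first, then similarity search in L2."""

    def __init__(self, embed, threshold: float = 0.92, l1_max: int = 10_000):
        self.embed = embed                      # callable: str -> np.ndarray (placeholder)
        self.threshold = threshold
        self.l1: OrderedDict[str, str] = OrderedDict()  # prompt -> response (exact match)
        self.l1_max = l1_max
        self.l2: list[tuple[np.ndarray, str]] = []      # stand-in for a Redis vector index

    def get(self, prompt: str) -> str | None:
        if prompt in self.l1:                   # L1: cheap exact hit
            self.l1.move_to_end(prompt)
            return self.l1[prompt]
        q = self.embed(prompt)                  # L2: semantic hit above the threshold
        for emb, response in self.l2:
            sim = float(np.dot(q, emb) / (np.linalg.norm(q) * np.linalg.norm(emb)))
            if sim >= self.threshold:
                return response
        return None

    def put(self, prompt: str, response: str) -> None:
        self.l1[prompt] = response
        if len(self.l1) > self.l1_max:          # LRU eviction keeps L1 bounded
            self.l1.popitem(last=False)
        self.l2.append((self.embed(prompt), response))
```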
Facts (cite-ready)
Stable fields and commands for AI/search citations.
- Install command: npx skills add https://github.com/yonatangross/orchestkit --skill semantic-caching
- Source: yonatangross/orchestkit
- Category: Dev Tools
- Verified: ✓
- First Seen: 2026-02-01
- Updated: 2026-02-18
Quick answers
What is semantic-caching?
Redis semantic caching for LLM applications. Use when implementing vector similarity caching, optimizing LLM costs through cached responses, or building multi-level cache hierarchies. Source: yonatangross/orchestkit.
How do I install semantic-caching?
Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.), then copy and run this command: npx skills add https://github.com/yonatangross/orchestkit --skill semantic-caching. Once installed, the skill will be automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.
Where is the source repository?
https://github.com/yonatangross/orchestkit
Details
- Category: Dev Tools
- Source: skills.sh
- First Seen: 2026-02-01