ollama-local

Local LLM inference with Ollama. Use when setting up local models for development, CI pipelines, or cost reduction. Covers model selection, LangChain integration, and performance tuning.

By @yonatangross

Installation

npx skills add https://github.com/yonatangross/skillforge-claude-plugin --skill ollama-local

SKILL.md

Run LLMs locally for cost savings, privacy, and offline development.

| Use case | Model | Size | Notes |
|---|---|---|---|
| Reasoning | deepseek-r1:70b | 42GB | GPT-4 level |
| Coding | qwen2.5-coder:32b | 35GB | 73.7% Aider benchmark |
| Embeddings | nomic-embed-text | 0.5GB | 768 dims, fast |
| General | llama3.2:70b | 40GB | Good all-around |
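
Once a model from the table is pulled (e.g. `ollama pull qwen2.5-coder:32b`), you can query it over Ollama's local REST API. A minimal sketch, assuming the server is running on its default port (11434); the prompt is illustrative:

```python
import requests

# Assumes the Ollama server is running locally on the default port
# and the model has already been pulled with `ollama pull`.
OLLAMA_URL = "http://localhost:11434/api/generate"

response = requests.post(
    OLLAMA_URL,
    json={
        "model": "qwen2.5-coder:32b",  # any model from the table above
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,  # return the full completion as one JSON object
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
```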

| Option | Cost | Latency |
|---|---|---|
| Cloud APIs | $675/month | 200-500ms |
| Ollama Local | $50/month (electricity) | 50-200ms |
| Savings | 93% | 2-3x faster |
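
For the LangChain integration mentioned above, one way to wire it up is with the `langchain-ollama` package (`pip install langchain-ollama`). A sketch, assuming the chat and embedding models from the model table are already pulled:

```python
from langchain_ollama import ChatOllama, OllamaEmbeddings

# Chat model served by the local Ollama instance (default http://localhost:11434).
llm = ChatOllama(model="qwen2.5-coder:32b", temperature=0)
reply = llm.invoke("Explain Python list comprehensions in one sentence.")
print(reply.content)

# Local embeddings via nomic-embed-text (768 dimensions, per the table above).
embeddings = OllamaEmbeddings(model="nomic-embed-text")
vector = embeddings.embed_query("local inference with Ollama")
print(len(vector))  # 768
```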

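On performance tuning: the latency numbers above depend heavily on keeping the model resident in memory between calls and on context-window size. A hedged sketch of the per-request knobs Ollama's API exposes (`keep_alive` and the `options` map); the values shown are illustrative starting points, not recommendations from this skill:

```python
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder:32b",
        "prompt": "Explain what a context window is.",
        "stream": False,
        "keep_alive": "30m",  # keep the model loaded between requests (avoids cold starts)
        "options": {
            "num_ctx": 8192,     # context window size; larger values use more VRAM
            "temperature": 0.2,  # lower temperature for more deterministic output
        },
    },
    timeout=300,
)
print(response.json()["response"])
```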


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command: npx skills add https://github.com/yonatangross/skillforge-claude-plugin --skill ollama-local
Category: Dev Tools
Verified
First seen: 2026-02-01
Updated: 2026-02-18

Quick answers

What is ollama-local?

Local LLM inference with Ollama. Use when setting up local models for development, CI pipelines, or cost reduction. Covers model selection, LangChain integration, and performance tuning. Source: yonatangross/skillforge-claude-plugin.

How do I install ollama-local?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.), then copy and run this command:

npx skills add https://github.com/yonatangross/skillforge-claude-plugin --skill ollama-local

Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.

Where is the source repository?

https://github.com/yonatangross/skillforge-claude-plugin