
databricks-repl

Execute Python code on a Databricks cluster via a stateful REPL session. Use this skill when the user wants to run Python on Databricks, perform data analysis with Spark, train models on a cluster, query Unity Catalog tables, use sub_llm() for recursive LM calls, or any task requiring a persistent Databricks execution context. Always use a dedicated --project-dir (e.g., examples/my-task/) to isolate session.json and repl_outputs per task. Covers session lifecycle (create, exec, await, cancel, destroy), output file management, and eviction recovery.
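For illustration, here is a possible session lifecycle under a dedicated `--project-dir`. The subcommand names (create, exec, await, destroy) come from the lifecycle list above, but the exact CLI flags and argument syntax are assumptions, so those calls are shown commented out; only the directory setup actually runs.

```shell
# Hypothetical lifecycle sketch; only the directory setup executes here.
PROJECT_DIR=examples/my-task
mkdir -p "$PROJECT_DIR"   # dedicated dir isolates session.json and repl_outputs per task

# Assumed command shapes (names per the lifecycle list; syntax is a guess):
# databricks-repl create  --project-dir "$PROJECT_DIR"
# databricks-repl exec    --project-dir "$PROJECT_DIR" 'print(spark.version)'
# databricks-repl await   --project-dir "$PROJECT_DIR"
# databricks-repl destroy --project-dir "$PROJECT_DIR"

ls "$PROJECT_DIR"
```

Using one project directory per task keeps each session's state file and output files from colliding with other concurrent tasks.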

5 Installs · 0 Trend · @wedneyyuri

Installation

$ npx skills add https://github.com/wedneyyuri/databricks-repl --skill databricks-repl

How to Install databricks-repl

Quickly install the databricks-repl AI skill in your development environment via the command line.

  1. Open Terminal: Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.).
  2. Run the Installation Command: Copy and run: npx skills add https://github.com/wedneyyuri/databricks-repl --skill databricks-repl
  3. Verify the Installation: Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Source: wedneyyuri/databricks-repl.

SKILL.md


Execute Python on a Databricks cluster through a stateful REPL that follows the Recursive Language Model (RLM) pattern. The root LM (Claude Code) never sees auth tokens, connection boilerplate, raw polling, or large dataframes — it interacts with Databricks through a clean CLI that returns JSON metadata and file paths.

| Principle | Description |
| --- | --- |
| Clean root context | Claude sees only JSON metadata and file paths — never tokens, boilerplate, or raw output |
| Append-only execution | Commands are cells in an ordered log — never edited, only appended — enabling lineage tracking and eviction replay |
| Output via files | stdout/stderr land as files. Claude reads selectively to control what enters its context |
| sub_llm() for recursion | On-cluster LLM calls for map/reduce, classification, summarization. No tools — pure text completions |
| Everything recorded | .cmd.py files + session.json steps = full lineage |
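The sub_llm() recursion described above amounts to a map/reduce over text. Here is a minimal sketch of that pattern, assuming sub_llm(prompt: str) -> str is available inside the REPL session and returns a plain text completion (the real signature is not documented here); a local stub stands in for the on-cluster call so the flow can run anywhere.

```python
# Assumption: sub_llm(prompt) -> str exists in the REPL session.
# The stub below stands in for the real on-cluster completion call.
def sub_llm(prompt: str) -> str:
    return f"summary({len(prompt)} chars)"  # stub; real call returns an LLM completion

comments = ["great product", "slow shipping", "love the UI"]

# Map: summarize each item with its own sub-LLM call, keeping the raw
# inputs out of the root LM's context.
partials = [sub_llm(f"Summarize this review: {c}") for c in comments]

# Reduce: combine the partial summaries into one final answer.
final = sub_llm("Combine these summaries: " + "; ".join(partials))
print(final)
```

Because only `final` (or a file path to it) needs to reach the root LM, arbitrarily large inputs can be processed without inflating its context.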

Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/wedneyyuri/databricks-repl --skill databricks-repl
Category
Data Analysis
Verified
First Seen
2026-02-26
Updated
2026-03-10


Quick answers

What is databricks-repl?

Execute Python code on a Databricks cluster via a stateful REPL session. Use this skill when the user wants to run Python on Databricks, perform data analysis with Spark, train models on a cluster, query Unity Catalog tables, use sub_llm() for recursive LM calls, or any task requiring a persistent Databricks execution context. Always use a dedicated --project-dir (e.g., examples/my-task/) to isolate session.json and repl_outputs per task. Covers session lifecycle (create, exec, await, cancel, destroy), output file management, and eviction recovery. Source: wedneyyuri/databricks-repl.

How do I install databricks-repl?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.), then copy and run this command: npx skills add https://github.com/wedneyyuri/databricks-repl --skill databricks-repl. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/wedneyyuri/databricks-repl

Details

Category
Data Analysis
Source
skills.sh
First Seen
2026-02-26