
databricks-pipelines

Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation.

9 Installs · 4 Trend · @databricks

Installation

$ npx skills add https://github.com/databricks/databricks-agent-skills --skill databricks-pipelines

How to Install databricks-pipelines

Quickly install the databricks-pipelines AI skill in your development environment via the command line.

  1. Open a terminal: open your terminal or command-line tool of choice (Terminal, iTerm, Windows Terminal, etc.).
  2. Run the installation command: copy and run `npx skills add https://github.com/databricks/databricks-agent-skills --skill databricks-pipelines`.
  3. Verify the installation: once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Source: databricks/databricks-agent-skills.

SKILL.md


FIRST: Use the parent databricks skill for CLI basics, authentication, profile selection, and data discovery commands.

Use this tree to determine which dataset type and features to use. Multiple features can apply to the same dataset — e.g., a Streaming Table can use Auto Loader for ingestion, Append Flows for fan-in, and Expectations for data quality. Choose the dataset type first, then layer on applicable features.

Pipelines use a default catalog and schema configured in the pipeline settings. All datasets are published there unless overridden.
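As a concrete illustration of layering features onto one dataset type, here is a minimal sketch of a declarative pipeline source file in Python. It is hedged: the table names and landing path are hypothetical, and the code only runs inside a Databricks pipeline, where the `dlt` module and the `spark` session are provided by the runtime.

```python
# Sketch of a Lakeflow Declarative Pipelines (formerly DLT) source file.
# Assumptions: runs inside a Databricks pipeline; `spark` is injected by the
# runtime; the landing path below is a hypothetical example.
import dlt

# Streaming Table ingesting files incrementally with Auto Loader,
# with an Expectation that drops rows failing the data-quality check.
@dlt.table(name="raw_orders", comment="Incremental ingest via Auto Loader")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def raw_orders():
    return (
        spark.readStream.format("cloudFiles")           # Auto Loader source
        .option("cloudFiles.format", "json")
        .load("/Volumes/main/landing/orders")           # hypothetical path
    )

# Fan-in with Append Flows: several flows append into one streaming table.
dlt.create_streaming_table("all_events")

@dlt.append_flow(target="all_events")
def order_events():
    return spark.readStream.table("raw_orders")
```

Because the pipeline's default catalog and schema come from its settings, the unqualified names above (`raw_orders`, `all_events`) are published there; use a fully qualified `catalog.schema.table` name to override the default for a specific dataset.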


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command
npx skills add https://github.com/databricks/databricks-agent-skills --skill databricks-pipelines
Category
Data Analysis
Verified
First Seen
2026-03-10
Updated
2026-03-10

Browse more skills from databricks/databricks-agent-skills

Quick answers

What is databricks-pipelines?

Develop Lakeflow Spark Declarative Pipelines (formerly Delta Live Tables) on Databricks. Use when building batch or streaming data pipelines with Python or SQL. Invoke BEFORE starting implementation. Source: databricks/databricks-agent-skills.

How do I install databricks-pipelines?

Open your terminal or command-line tool (Terminal, iTerm, Windows Terminal, etc.) and run: `npx skills add https://github.com/databricks/databricks-agent-skills --skill databricks-pipelines`. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code, Cursor, or OpenClaw.

Where is the source repository?

https://github.com/databricks/databricks-agent-skills

Details

Category
Data Analysis
Source
skills.sh
First Seen
2026-03-10