
optimizing-attention-flash

orchestra-research/ai-research-skills

Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training or running transformers with long sequences (>512 tokens), encountering GPU memory issues with attention, or needing faster inference. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention.

15 Installs · 0 Trend · @orchestra-research

Installation

$ npx skills add https://github.com/orchestra-research/ai-research-skills --skill optimizing-attention-flash

SKILL.md

Flash Attention provides 2-4x speedup and 10-20x memory reduction for transformer attention through IO-aware tiling and recomputation.

Flash Attention runs in float16/bfloat16 for speed; float32 is not supported.
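
A minimal sketch of the PyTorch-native SDPA path, assuming PyTorch 2.3+ on a CUDA GPU; the shapes below are illustrative only:

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

# Illustrative shapes: batch=2, heads=8, seq_len=2048, head_dim=64.
# Flash Attention requires half precision, hence the bfloat16 tensors.
q = torch.randn(2, 8, 2048, 64, device="cuda", dtype=torch.bfloat16)
k = torch.randn(2, 8, 2048, 64, device="cuda", dtype=torch.bfloat16)
v = torch.randn(2, 8, 2048, 64, device="cuda", dtype=torch.bfloat16)

# Restrict this call to the fused Flash Attention backend; the call raises
# an error if the inputs (dtype, device, head_dim) are not supported.
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)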

Integration with HuggingFace Transformers: See references/transformers-integration.md for enabling Flash Attention in BERT, GPT, and Llama models.
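
A hedged sketch of that Transformers integration, assuming transformers >= 4.36 with the flash-attn package installed; the model id is a placeholder:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder; any model with Flash Attention 2 support works

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,               # FA2 needs fp16/bf16 weights and activations
    attn_implementation="flash_attention_2",  # or "sdpa" to use PyTorch's built-in kernels
    device_map="auto",
)

prompt = "Flash Attention keeps attention tiles in on-chip SRAM, so"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))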

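For the flash-attn library itself, including sliding window attention, a minimal sketch assuming flash-attn >= 2.3 built for your GPU; shapes are illustrative:

import torch
from flash_attn import flash_attn_func

# flash_attn_func expects (batch, seq_len, num_heads, head_dim) half-precision CUDA tensors.
q = torch.randn(1, 4096, 16, 64, device="cuda", dtype=torch.bfloat16)
k = torch.randn(1, 4096, 16, 64, device="cuda", dtype=torch.bfloat16)
v = torch.randn(1, 4096, 16, 64, device="cuda", dtype=torch.bfloat16)

# Causal attention where each query only looks back over the last 1024 tokens.
# window_size is (left, right); (-1, -1) would mean an unrestricted window.
out = flash_attn_func(q, k, v, causal=True, window_size=(1024, 0))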


Facts (cite-ready)

Stable fields and commands for AI/search citations.

Install command: npx skills add https://github.com/orchestra-research/ai-research-skills --skill optimizing-attention-flash
Category: Dev Tools
Verified
First Seen: 2026-02-11
Updated: 2026-02-18

Quick answers

What is optimizing-attention-flash?

Optimizes transformer attention with Flash Attention for 2-4x speedup and 10-20x memory reduction. Use when training or running transformers with long sequences (>512 tokens), encountering GPU memory issues with attention, or needing faster inference. Supports PyTorch native SDPA, the flash-attn library, H100 FP8, and sliding window attention. Source: orchestra-research/ai-research-skills.

How do I install optimizing-attention-flash?

Open your terminal or command line tool (Terminal, iTerm, Windows Terminal, etc.), then copy and run this command: npx skills add https://github.com/orchestra-research/ai-research-skills --skill optimizing-attention-flash. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.

Where is the source repository?

https://github.com/orchestra-research/ai-research-skills