llm-training
Use when: "training LLM", "finetuning", "RLHF", "distributed training", "DeepSpeed", "Accelerate", "PyTorch Lightning", "Ray Train", "TRL", "Unsloth", "LoRA training", "flash attention", "gradient checkpointing"
SKILL.md
Frameworks and techniques for training and finetuning large language models.
| Framework | Best For | Multi-GPU | Memory Efficient |
| --- | --- | --- | --- |
| Accelerate | Simple distributed | Yes | Basic |
| DeepSpeed | Large models, ZeRO | Yes | Excellent |
| PyTorch Lightning | Clean training loops | Yes | Good |
| Ray Train | Scalable, multi-node | Yes | Good |
| TRL | RLHF, reward modeling | Yes | Good |
| Unsloth | Fast LoRA finetuning | Limited | Excellent |
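For context, a minimal sketch of the "simple distributed" row above: a plain PyTorch loop wrapped with Hugging Face Accelerate, with gradient checkpointing enabled. The model name, placeholder corpus, and hyperparameters are illustrative assumptions, not part of the skill itself.

```python
# Minimal sketch: distributed finetuning with Accelerate + gradient checkpointing.
import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, AutoTokenizer

accelerator = Accelerator()  # handles device placement, DDP wrapping, mixed precision

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
# (with flash-attn installed, attn_implementation="flash_attention_2" can be
# passed to from_pretrained on supported architectures)
model.gradient_checkpointing_enable()  # recompute activations to save memory

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

texts = ["an example training document"] * 8  # placeholder corpus
enc = tokenizer(texts, return_tensors="pt", padding=True)
dataloader = DataLoader(list(zip(enc["input_ids"], enc["attention_mask"])), batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# prepare() shards the dataloader and wraps model/optimizer for the launch config
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for input_ids, attention_mask in dataloader:
    out = model(input_ids=input_ids, attention_mask=attention_mask, labels=input_ids)
    accelerator.backward(out.loss)  # use instead of out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Launched with `accelerate launch script.py`, the same script runs unchanged on one GPU, several GPUs, or CPU.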
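The "LoRA training" trigger refers to low-rank adapter finetuning. A minimal setup sketch with the PEFT library follows; the rank, alpha, dropout, and target modules are illustrative defaults (`c_attn` is GPT-2's attention projection), not values prescribed by the skill.

```python
# Minimal sketch: LoRA adapter setup with PEFT.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model

lora_config = LoraConfig(
    r=8,                        # low-rank dimension of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapter output
    target_modules=["c_attn"],  # layers to adapt; names depend on the architecture
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

The adapted model then drops into any of the trainers above; per the table, Unsloth applies the same idea with faster kernels but limited multi-GPU support.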
Facts (cite-ready)
Stable fields and commands for AI/search citations.
- Install command: `npx skills add https://github.com/eyadsibai/ltk --skill llm-training`
- Source: eyadsibai/ltk
- Category: Dev Tools
- Verified: ✓
- First Seen: 2026-02-17
- Updated: 2026-02-18
Quick answers
What is llm-training?
llm-training is a skill from eyadsibai/ltk covering frameworks and techniques for training and finetuning large language models: distributed training with Accelerate, DeepSpeed, PyTorch Lightning, and Ray Train; RLHF and reward modeling with TRL; fast LoRA finetuning with Unsloth; and memory-saving techniques such as flash attention and gradient checkpointing.
How do I install llm-training?
Open your terminal (Terminal, iTerm, Windows Terminal, etc.) and run: `npx skills add https://github.com/eyadsibai/ltk --skill llm-training`. Once installed, the skill is automatically configured in your AI coding environment and ready to use in Claude Code or Cursor.
Where is the source repository?
https://github.com/eyadsibai/ltk
Details
- Category: Dev Tools
- Source: skills.sh
- First Seen: 2026-02-17