TorchTitan is PyTorch's official platform for large-scale LLM pretraining with composable 4D parallelism (FSDP2, TP, PP, CP), achieving 65%+ speedups over baselines on H100 GPUs.
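TorchTitan composes these parallelism dimensions by layering PyTorch's distributed APIs on a shared `DeviceMesh`. As an illustration only (not TorchTitan's actual code), here is a minimal sketch composing two of the four dimensions, FSDP2 data parallelism plus tensor parallelism, on a 2D mesh. It assumes PyTorch >= 2.6 (where `fully_shard` is public) and 8 GPUs; `ToyBlock` and the 4x2 mesh shape are hypothetical.

```python
# Launch with: torchrun --nproc_per_node=8 sketch.py
import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import fully_shard
from torch.distributed.tensor.parallel import (
    ColwiseParallel,
    RowwiseParallel,
    parallelize_module,
)

class ToyBlock(nn.Module):
    """Illustrative two-layer MLP standing in for a transformer block."""

    def __init__(self, dim: int = 1024):
        super().__init__()
        self.up = nn.Linear(dim, 4 * dim)
        self.down = nn.Linear(4 * dim, dim)

    def forward(self, x):
        return self.down(torch.relu(self.up(x)))

torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# 2D mesh over 8 GPUs: 4-way data parallel x 2-way tensor parallel.
mesh = init_device_mesh("cuda", (4, 2), mesh_dim_names=("dp", "tp"))

model = ToyBlock().cuda()

# Tensor parallelism: shard `up` column-wise and `down` row-wise over the
# "tp" sub-mesh, so the intermediate activation stays sharded in between.
parallelize_module(
    model,
    mesh["tp"],
    {"up": ColwiseParallel(), "down": RowwiseParallel()},
)

# FSDP2 on top: shard the (already TP-sharded) parameters over "dp".
fully_shard(model, mesh=mesh["dp"])

out = model(torch.randn(8, 1024, device="cuda"))
```

The same pattern extends to pipeline and context parallelism by adding mesh dimensions, which is what makes the composition "4D".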
| Model | Variants | Status |
|-------|----------|--------|
| Llama 3.1 | 8B, 70B, 405B | Production |
| Llama 4 | Various | Experimental |
| DeepSeek V3 | 16B, 236B, 671B (MoE) | Experimental |
| GPT-OSS | 20B, 120B (MoE) | Experimental |
| Qwen 3 | Various | Experimental |
| Flux | Diffusion | Experimental |
| Model | GPUs | Parallelism | TPS/GPU | Techniques |
|-------|------|-------------|---------|------------|
Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scales from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing. Source: orchestra-research/ai-research-skills.
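For the checkpointing piece, here is a minimal sketch using `torch.distributed.checkpoint` (DCP), the PyTorch mechanism TorchTitan's checkpointing builds on. The toy model, optimizer, and checkpoint path are illustrative; run it under torchrun with a process group initialized as shown.

```python
import os

import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp
from torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict

dist.init_process_group("nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

# Illustrative stand-ins for the real (sharded) model and optimizer.
model = torch.nn.Linear(1024, 1024).cuda()
optim = torch.optim.AdamW(model.parameters(), lr=3e-4)

# Gather sharding-aware state dicts and write one checkpoint from all ranks.
model_sd, optim_sd = get_state_dict(model, optim)
dcp.save({"model": model_sd, "optim": optim_sd}, checkpoint_id="ckpt/step_1000")

# On resume: load into fresh state dicts, then install them in place.
# DCP reshards automatically if the world size or layout changed.
model_sd, optim_sd = get_state_dict(model, optim)
dcp.load({"model": model_sd, "optim": optim_sd}, checkpoint_id="ckpt/step_1000")
set_state_dict(model, optim, model_state_dict=model_sd, optim_state_dict=optim_sd)

dist.destroy_process_group()
```

Because DCP writes one shard-aware checkpoint across all ranks, resuming on a different GPU count does not require a manual consolidation step.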