What is nebius-batch-synthetic?
Generate synthetic training data using Nebius Token Factory batch inference (asynchronous, roughly 50% cheaper than real-time, and with no impact on rate limits). Use this skill whenever the user wants to run batch inference on Nebius, generate synthetic datasets at scale, create instruction-tuning data, run async LLM jobs over large prompt sets, or export batch results as fine-tuning JSONL. Trigger on phrases like "generate synthetic data with Nebius", "run batch inference", "create training data at scale", "async LLM generation", "batch job on Token Factory", "generate QA pairs", or any question about bulk/offline inference or synthetic data pipelines on Nebius. Source: arindam200/nebius-skills.
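As a rough sketch of what a batch job's input looks like, the snippet below builds the one-request-per-line JSONL payload in the OpenAI-compatible batch format, which Nebius Token Factory accepts; the model name and `custom_id` scheme here are illustrative assumptions, not values mandated by the skill.

```python
import json

def build_batch_lines(prompts, model="meta-llama/Meta-Llama-3.1-70B-Instruct"):
    """Serialize prompts into batch-input JSONL: one chat-completion
    request per line. `custom_id` lets you match results back to
    prompts after the async job completes."""
    lines = []
    for i, prompt in enumerate(prompts):
        request = {
            "custom_id": f"req-{i}",          # illustrative ID scheme
            "method": "POST",
            "url": "/v1/chat/completions",
            "body": {
                "model": model,               # illustrative model name
                "messages": [{"role": "user", "content": prompt}],
            },
        }
        lines.append(json.dumps(request))
    return "\n".join(lines)

prompts = ["Explain overfitting.", "What is gradient descent?"]
batch_jsonl = build_batch_lines(prompts)
```

The resulting string would be written to a `.jsonl` file and uploaded to start the batch job; results arrive asynchronously and can then be reshaped into fine-tuning JSONL.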