Reinforcement Learning from Human Feedback (RLHF) is a technique for aligning language models with human preferences. Rather than relying solely on next-token prediction, RLHF uses human judgment to guide model behavior toward helpful, harmless, and honest outputs.
Pretraining produces models that predict likely text, not necessarily good text. A model trained on internet data learns to complete text in ways that reflect its training distribution—including toxic, unhelpful, or dishonest patterns. RLHF addresses this gap by optimizing for human preferences rather than likelihood.
The core insight: humans can often recognize good outputs more easily than they can specify what makes an output good. RLHF exploits this by collecting human judgments and using them to shape model behavior.
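The standard way those collected judgments are turned into a training signal is a pairwise reward model. The sketch below is a minimal illustration (function names are my own, not from the source) of the Bradley-Terry formulation commonly used in RLHF: a reward model assigns each response a scalar score, the probability that the human-preferred response wins is the sigmoid of the score difference, and the model is trained to minimize the negative log of that probability.

```python
import math

def preference_probability(r_chosen: float, r_rejected: float) -> float:
    """P(chosen preferred over rejected) under the Bradley-Terry model:
    sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

def pairwise_loss(r_chosen: float, r_rejected: float) -> float:
    """Negative log-likelihood of the human preference label.
    Small when the reward model ranks the chosen response higher,
    large when it disagrees with the human judgment."""
    return -math.log(preference_probability(r_chosen, r_rejected))

# Correct ranking by a wide margin -> low loss.
print(pairwise_loss(2.0, 0.0))
# Reversed ranking -> high loss, pushing the scores apart during training.
print(pairwise_loss(0.0, 2.0))
```

In practice the scalar rewards come from a neural network head on top of the language model, and this loss is averaged over a dataset of (prompt, chosen, rejected) triples; the toy version above only shows the math of the objective.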
Understanding Reinforcement Learning from Human Feedback (RLHF) for aligning language models. Use when studying preference data, reward modeling, policy optimization, or direct alignment algorithms such as DPO. Source: itsmostafa/llm-engineering-skills.