DeepSeek-R1: large-scale reinforcement learning with cold start

DeepSeek-R1 is DeepSeek's first generation of reasoning models, achieving performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
DeepSeek-R1
The DeepSeek-R1 release introduces DeepSeek's first-generation reasoning models: DeepSeek-R1-Zero and DeepSeek-R1.
- DeepSeek-R1-Zero: a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step. It demonstrates remarkable reasoning performance, and numerous powerful and interesting reasoning behaviors emerged naturally through RL (a sketch of the kind of rewards involved follows this list). However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing.
- DeepSeek-R1: improves on DeepSeek-R1-Zero by adding a cold-start data stage before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
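As a rough illustration of how RL alone can shape reasoning, the accompanying paper describes rule-based rewards: a correctness check on the final answer plus a format check on the reasoning tags. The sketch below is a minimal, hypothetical rendering of that idea; the exact tag format, parsing, and weighting are assumptions, not DeepSeek's released code.

```python
import re

# Hedged sketch: rule-based rewards of the kind used to train DeepSeek-R1-Zero.
# The exact tags, parsing, and weights here are illustrative assumptions.

def format_reward(completion: str) -> float:
    """Reward completions that wrap reasoning and answer in the expected tags."""
    pattern = r"^<think>.+?</think>\s*<answer>.+?</answer>$"
    return 1.0 if re.match(pattern, completion.strip(), re.DOTALL) else 0.0

def accuracy_reward(completion: str, reference_answer: str) -> float:
    """Reward completions whose final answer matches a verifiable reference."""
    match = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    if match is None:
        return 0.0
    return 1.0 if match.group(1).strip() == reference_answer.strip() else 0.0

def total_reward(completion: str, reference_answer: str) -> float:
    # Verifiable tasks (math, code) keep this reward cheap to compute at scale.
    return accuracy_reward(completion, reference_answer) + format_reward(completion)
```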
Highlights
- Emphasizes reasoning skills.
- Achieves strong performance with minimal human supervision.
- Also releases smaller distilled models based on Qwen and Llama.
Model Summary
Post-Training: Large-Scale Reinforcement Learning on the Base Model
- DeepSeek directly applies reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) reasoning for solving complex problems, resulting in DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that the reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT; a minimal sketch of the RL update appears after this list. This breakthrough paves the way for future advancements in this area.
- DeepSeek introduces the pipeline used to develop DeepSeek-R1. The pipeline incorporates two RL stages, aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
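To make the RL step above concrete, the sketch below shows a group-relative advantage computation in the style of GRPO, the policy-optimization scheme the DeepSeek-R1 paper reports using. It is a simplified illustration under that assumption; the normalization details and names are not taken from DeepSeek's code.

```python
import statistics

# Hedged sketch of a group-relative (GRPO-style) advantage computation: several
# completions are sampled for the same prompt, and each is scored against the
# group's own mean reward rather than a learned value function.
# Normalization and zero-variance handling here are illustrative assumptions.

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Advantage of each completion relative to its sampling group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0.0:
        std = 1.0  # all completions scored the same; avoid division by zero
    return [(r - mean) / std for r in rewards]

# Example: four completions for one prompt, rewarded 0/1 by a rule-based checker.
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # [1.0, -1.0, -1.0, 1.0]
```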
Distillation: Smaller Models Can Be Powerful Too
- DeepSeek demonstrates that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance than the reasoning patterns discovered through RL on small models.
- Using reasoning data generated by DeepSeek-R1, DeepSeek fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. DeepSeek open-sources six distilled checkpoints (1.5B, 7B, 8B, 14B, 32B, and 70B) based on the Qwen2.5 and Llama3 series; a minimal distillation sketch follows this list.
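The distillation described above is plain supervised fine-tuning on R1-generated reasoning traces, not RL on the small model. The sketch below illustrates that setup with Hugging Face transformers; the base checkpoint, data format, and hyperparameters are illustrative assumptions rather than DeepSeek's exact recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: distilling R1-style reasoning into a small dense model via
# ordinary causal-LM fine-tuning on (prompt, R1-generated trace) pairs.
# Model name, data, and hyperparameters below are illustrative assumptions.
model_name = "Qwen/Qwen2.5-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Each sample pairs a prompt with a long chain-of-thought produced by DeepSeek-R1.
samples = [
    ("Solve: 12 * 13 = ?", "<think>12 * 13 = 12 * 10 + 12 * 3 = 156</think> 156"),
]

model.train()
for prompt, target in samples:
    batch = tokenizer(prompt + "\n" + target, return_tensors="pt")
    # Standard next-token loss over the whole sequence; no reward model, no RL.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```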
Models and Downloads
- Models are available for research and commercial use.
- Check the GitHub Repo for model links and usage instructions.
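For orientation, a rough usage sketch with Hugging Face transformers is shown below; the model id and sampling settings are illustrative, and the GitHub repo remains the authoritative source for recommended usage.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: running one of the distilled checkpoints locally.
# The model id and generation settings are illustrative assumptions.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 24? Reason step by step."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The distilled models emit their chain-of-thought before the final answer.
outputs = model.generate(inputs, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```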
License
Models are released under DeepSeek’s custom model license.