DeepSeek-V3 Technical Report

We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token. To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance. We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities. Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models. Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training. In addition, its training process is remarkably stable. Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks. The model checkpoints are available at GitHub - deepseek-ai/DeepSeek-V3.

Read in full here:

DeepSeek-V3 is an impressive advancement in the field of open-source large language models. With its Mixture-of-Experts architecture and innovative use of Multi-head Latent Attention, it’s clear the DeepSeek team is addressing both performance and efficiency challenges head-on. Training it on 14.8 trillion high-quality tokens, followed by fine-tuning and reinforcement learning, shows a well-structured and thorough approach to model development.
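To make the sparse-activation idea concrete, here is a minimal, self-contained sketch of top-k expert routing, the mechanism that lets a huge MoE model run only a small slice of its parameters per token. This is plain NumPy with toy sizes chosen for illustration; it is not DeepSeek's actual code or configuration.

```python
# Minimal sketch (not DeepSeek's code): top-k expert routing in a toy MoE layer.
# Expert count, top-k, and dimensions are illustrative, not V3's real config.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2          # toy sizes, nothing like the 671B model
experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router_w = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route a single token vector x to its top-k experts and mix their outputs."""
    scores = x @ router_w                      # affinity of this token to each expert
    chosen = np.argsort(scores)[-top_k:]       # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                   # normalize gate values over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape, "-> only", top_k, "of", n_experts, "experts ran for this token")
```

Scaled up, this is roughly how a 671B-parameter model can get away with activating only about 37B parameters per token: the router picks a handful of experts and the rest of the network stays idle for that token.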

What really stands out is the auxiliary-loss-free load balancing strategy, a smart move that simplifies training while keeping performance high. Also, the fact that DeepSeek-V3 maintained stability throughout training, with no irrecoverable loss spikes or rollbacks, is commendable and gives confidence to anyone planning to use it for real-world applications.
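My reading of the report is that the balancing correction is a per-expert bias applied only when selecting the top-k experts, nudged after each step according to how loaded each expert turned out to be. The sketch below illustrates that idea; the update step size, batch shape, and random router scores are purely illustrative assumptions, not values from the paper.

```python
# Minimal sketch, assuming the bias-based scheme described in the report:
# a per-expert bias is added to routing scores only when picking the top-k experts,
# and nudged after each step so overloaded experts become less likely to be chosen.
# Step size and batch shape are hypothetical, not V3's actual values.
import numpy as np

rng = np.random.default_rng(0)
n_tokens, n_experts, top_k = 1024, 8, 2
bias = np.zeros(n_experts)                    # balancing bias, used for selection only
update_speed = 0.01                           # hypothetical step size

for step in range(100):
    scores = rng.standard_normal((n_tokens, n_experts))      # stand-in router scores
    chosen = np.argsort(scores + bias, axis=1)[:, -top_k:]   # bias shifts selection only
    load = np.bincount(chosen.ravel(), minlength=n_experts)  # tokens routed to each expert
    mean_load = load.mean()
    # push bias down for overloaded experts, up for underloaded ones
    bias -= update_speed * np.sign(load - mean_load)

print("tokens per expert after balancing:", load)
```

Because the correction happens through a selection-time bias rather than an extra loss term, the main training objective is left untouched, which is presumably where the "simplifies training while keeping performance high" benefit comes from.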

For anyone diving into DeepSeek, whether you're setting it up, getting API keys, troubleshooting issues, or just exploring features, I've created a website that offers detailed, step-by-step guides. It's a central hub for everything DeepSeek-related, including tips, local deployment help, and support information. I hope it serves as a helpful companion for those experimenting with or deploying DeepSeek models in their own workflows.

Feel free to check it out and share any feedback or suggestions!