Original title: DeepSeek Releases NSA: A New Breakthrough in Ultra-Fast Long-Context Training and Inference
Source: 小夏聊AIGC
Word count: 3,860 characters
DeepSeek’s NSA: A Breakthrough in Accelerating AI Model Training and Inference
The field of artificial intelligence is constantly evolving, with a major focus on improving the speed and efficiency of large language models. DeepSeek, an AI company, has recently unveiled a significant advancement with its novel sparse attention mechanism, NSA (Native Sparse Attention). This innovative technology promises to revolutionize how we train and utilize AI models, particularly those dealing with long-context tasks.
Addressing the Bottleneck of Long-Context Processing
One of the biggest challenges in natural language processing is handling long sequences of text. Traditional attention mechanisms, while effective, scale quadratically with sequence length, so contexts of 64k tokens and beyond become prohibitively expensive to process. This computational burden significantly slows down both training and inference, creating a bottleneck for the development of more powerful AI models. Existing sparse attention methods aim to alleviate this issue but often fall short: they fail to deliver speedups in both the training and inference phases, or they map poorly onto modern hardware.
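To make that quadratic scaling concrete, here is a back-of-the-envelope Python calculation. The head count and precision are illustrative assumptions, not figures from DeepSeek; the point is simply that the attention score matrix alone grows with the square of the sequence length.

```python
# Illustrative only: memory footprint of the full attention score matrix.
seq_len = 64 * 1024        # a 64k-token context
num_heads = 32             # assumed head count, for illustration
bytes_per_score = 2        # fp16/bf16

score_bytes = seq_len * seq_len * num_heads * bytes_per_score
print(f"Score matrix per layer: {score_bytes / 2**30:.0f} GiB")  # ~256 GiB
```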
NSA: A Multi-pronged Approach to Efficiency
DeepSeek’s NSA tackles these limitations head-on. Its core innovation lies in a three-component system: a dynamic hierarchical sparsity strategy, coarse-grained token compression, and fine-grained token selection. This integrated approach allows NSA to maintain both global context awareness and local precision, striking a crucial balance between efficiency and accuracy.
The architecture comprises three parallel attention branches: compressed attention, selective attention, and sliding window attention. Compressed attention captures coarse-grained semantic information by aggregating keys and values into block-level representations. Selective attention refines this by prioritizing important fine-grained information, assigning importance scores to blocks and attending at full resolution only within the highest-ranking ones. Finally, sliding window attention handles local context explicitly, which keeps the other two branches from over-relying on local patterns. A sketch of how the branches fit together follows below.
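The following PyTorch sketch illustrates the three-branch structure for a single attention head. It is a minimal approximation under stated assumptions, not DeepSeek’s implementation: real NSA uses learned compression, trainable gating, causal masking, and fused Triton kernels, whereas this sketch substitutes mean pooling and a fixed average of the branches.

```python
import torch
import torch.nn.functional as F

def nsa_style_attention(q, k, v, block_size=64, top_k=16, window=512):
    """Single-head sketch of NSA's three branches (illustrative, not DeepSeek's code)."""
    seq, d = k.shape

    # 1) Compressed attention: pool K/V into block-level representations
    #    (mean pooling stands in for NSA's learned compression).
    n_blocks = seq // block_size
    k_blk = k[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(1)
    v_blk = v[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(1)
    cmp_scores = q @ k_blk.T / d ** 0.5              # (q_len, n_blocks)
    out_cmp = F.softmax(cmp_scores, -1) @ v_blk

    # 2) Selective attention: rank blocks by the compressed scores, then
    #    attend at full token resolution only inside the top-k blocks.
    top = cmp_scores.mean(0).topk(min(top_k, n_blocks)).indices
    idx = torch.cat([torch.arange(int(b) * block_size, (int(b) + 1) * block_size)
                     for b in top])
    out_sel = F.softmax(q @ k[idx].T / d ** 0.5, -1) @ v[idx]

    # 3) Sliding-window attention over the most recent tokens only.
    out_win = F.softmax(q @ k[-window:].T / d ** 0.5, -1) @ v[-window:]

    # NSA mixes the branches with learned gates; a plain average stands in here.
    return (out_cmp + out_sel + out_win) / 3
```

For example, `nsa_style_attention(torch.randn(8, 64), torch.randn(4096, 64), torch.randn(4096, 64))` returns an `(8, 64)` output while touching only 16 of the 64 key blocks at full resolution, which is the source of the savings.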
Hardware Optimization for Maximum Performance
NSA isn’t just a software solution; it’s designed with hardware in mind. DeepSeek leveraged Triton to create hardware-aligned sparse attention kernels, focusing on architectures that share KV caches, such as GQA and MQA. Optimizations include group-centric data loading, shared KV loading, and grid loop scheduling, resulting in near-optimal arithmetic intensity.
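The memory-access pattern being exploited can be shown in plain PyTorch (a hypothetical toy, not the Triton kernels themselves): under GQA, every query head in a group attends against the same KV head, so a group-centric kernel can load each KV block from memory once and reuse it for the whole group.

```python
import torch

def gqa_attention(q, k, v):
    """Toy GQA attention: q is (num_q_heads, seq, d); k and v are (num_kv_heads, seq, d)."""
    num_q_heads, seq, d = q.shape
    num_kv_heads = k.shape[0]
    group = num_q_heads // num_kv_heads
    out = torch.empty_like(q)
    for g in range(num_kv_heads):
        kg, vg = k[g], v[g]  # one shared KV load serves the entire query group
        for h in range(g * group, (g + 1) * group):
            attn = torch.softmax(q[h] @ kg.T / d ** 0.5, dim=-1)
            out[h] = attn @ vg
    return out
```

In the real kernels this loop structure becomes Triton program IDs and on-chip tiles; the takeaway is only that KV memory traffic scales with the number of KV heads rather than query heads.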
Impressive Results Across Benchmarks
DeepSeek’s experiments using a 27B parameter model (with 3B active parameters) incorporating GQA and MoE demonstrated NSA’s superior performance. Across various benchmarks, the NSA-enhanced model outperformed all baselines, including the full-attention model, achieving top performance in seven of nine metrics. In long-context tasks, NSA showed exceptionally high retrieval accuracy in “needle-in-a-haystack” tests with 64k contexts. On LongBench, it excelled in multi-hop QA and code understanding tasks. Furthermore, combining NSA with reasoning models through knowledge distillation and supervised fine-tuning enabled chain-of-thought reasoning on 32k-length mathematical reasoning tasks. On the AIME 24 benchmark, the sparse attention variant (NSA-R) significantly outperformed its full-attention counterpart at both 8k and 16k context settings.
The speed improvements were remarkable. On an 8-GPU A100 system, NSA achieved up to 9x faster forward propagation and 6x faster backward propagation with 64k contexts. Decoding speed improved dramatically, reaching an 11.6x speedup at 64k context length.
Conclusion and Future Directions
DeepSeek’s NSA represents a significant contribution to the open-source AI community, offering a promising path toward accelerating long-context modeling and its applications. While the results are impressive, the team acknowledges the potential for further optimization, particularly in refining how the sparse attention patterns are learned and exploring more efficient hardware implementations. This breakthrough underscores the ongoing drive to make AI models faster, more efficient, and more accessible, paving the way for even more powerful and versatile AI systems in the future.
Contact the Author
Author’s WeChat:
About the author: Focused on frontline news and technology in AI-generated content, covering the latest developments and use cases in AI-generated art, text, music, video, and more: daily news briefs, technical explainers, industry analysis, expert commentary, and creative showcases. We look forward to exploring the boundless potential of AI with you. Follow us and share your AI creations or feedback.