BEIJING: Chinese artificial intelligence startup DeepSeek on Thursday unveiled an upgraded version of its flagship V3 model, which the company says has been optimised to support Chinese-made chips while offering faster processing speeds.
In a statement on its official WeChat account, the company announced that the new DeepSeek-V3.1 features a UE8M0 FP8 precision format designed for “soon-to-be-released next-generation domestic chips”.
While DeepSeek did not specify which chip models or manufacturers would be supported, the move signals an effort to align its models with China’s growing semiconductor ecosystem as Beijing seeks to reduce reliance on American technology amid Washington’s export controls.
FP8, or 8-bit floating point, is a number format that allows AI models to run more efficiently, using less memory and delivering faster performance than higher-precision formats.
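To illustrate the memory saving, the toy encoder below mimics the general idea of an exponent-only 8-bit format: UE8M0 is widely described as an unsigned format with 8 exponent bits and no mantissa, so each stored value is a power of two. This is a hedged sketch for intuition only, not DeepSeek's implementation; the `bias` value and helper names are assumptions.

```python
# Illustrative sketch only: round a positive value to the nearest power of
# two and store just one unsigned exponent byte (UE8M0-style), cutting
# storage to a quarter of float32's four bytes per value.
import math
import numpy as np

def encode_ue8m0(x: float, bias: int = 127) -> int:
    """Round positive x to the nearest power of two; return a biased exponent byte."""
    assert x > 0
    e = round(math.log2(x))            # nearest power-of-two exponent
    return max(0, min(255, e + bias))  # clamp to one unsigned byte

def decode_ue8m0(byte: int, bias: int = 127) -> float:
    return 2.0 ** (byte - bias)

scales = np.array([0.75, 3.2, 1024.0], dtype=np.float32)
codes = np.array([encode_ue8m0(float(s)) for s in scales], dtype=np.uint8)
decoded = [decode_ue8m0(int(c)) for c in codes]
print(codes.nbytes, scales.nbytes)  # 3 bytes vs 12 bytes: 4x smaller
print(decoded)                      # [1.0, 4.0, 1024.0] — nearest powers of two
```

The trade-off is visible in the decoded values: each input snaps to the nearest power of two, which is why such formats are typically used where coarse precision is acceptable, such as scale factors, in exchange for the memory and bandwidth savings.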
According to DeepSeek, the upgraded model also employs a hybrid inference structure that allows it to switch between reasoning and non-reasoning modes.
Users can toggle between these modes through a “deep thinking” button available on the company’s app and web platform, both of which now run the V3.1 version.
The latest release marks the third update to DeepSeek’s models in recent months, following the R1 upgrade in May and a V3 enhancement in March.
The company also confirmed that it will adjust costs for developers using its API platform from 6 September.
DeepSeek first drew global attention in January 2025 with the launch of its R1 model, which it claimed could rival models from Western developers such as OpenAI, xAI, and Anthropic at significantly lower cost and with reduced computing requirements.
The announcement raised questions about the effectiveness of US export restrictions on advanced chips, often referred to as Washington’s “small yard, high fence” strategy.
Industry analysts noted that DeepSeek has managed to compensate for limited access to cutting-edge US hardware by focusing on algorithmic efficiency.
Techniques such as mixture-of-experts (MoE), which selectively activates only a fraction of the model's parameters for each input, along with transfer learning, enable its models to generate responses faster and more cheaply once trained.
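The efficiency idea behind MoE-style selective activation can be sketched in a few lines: a small router scores all experts, but only the top-k are actually evaluated, so per-token compute stays low even as total parameter count grows. This is a generic illustrative toy, not DeepSeek's architecture; the dimensions, routing scheme, and names are assumptions.

```python
# Toy mixture-of-experts forward pass: route each input to the top-k of
# n_experts feed-forward "experts" and skip the rest entirely.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 4, 2

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
gate_w = rng.standard_normal((d, n_experts))                       # router weights

def moe_forward(x: np.ndarray) -> np.ndarray:
    scores = x @ gate_w                # one router score per expert
    top = np.argsort(scores)[-top_k:]  # indices of the top-k experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over the selected experts only
    # Only the chosen experts run; the other experts' weights are never touched,
    # which is where the compute saving comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d)
y = moe_forward(x)
print(y.shape)  # (8,)
```

With top_k of 2 out of 4 experts, each input pays for half the expert compute while the model retains the full parameter pool, which is the rough mechanism analysts point to when explaining how such models cut inference cost.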
However, experts caution that China’s AI progress is not yet fully autonomous. DeepSeek has built on open-source research such as Meta’s LLaMA models and continues to rely on US-made Nvidia chips for training.
While Chinese chipmakers are making strides, they remain some distance from matching the technological sophistication of American AI processors, especially for compute-intensive pre-training tasks.