This marked the first time a Chinese developer had publicly validated the feasibility of training AI models with the mixture-of-experts (MoE) architecture using only Huawei chips. MoE has become widely adopted because it delivers high performance with fewer computational resources.
The Huawei stack met the “severe demands” of training large-scale MoE models across a range of sizes, TeleAI researchers said.
“These contributions collectively address critical bottlenecks in frontier-scale model training, establishing a mature full-stack solution tailored to domestic computational ecosystems,” they said.
