GLM-5 - China's 744B Open-Source Frontier Model

Zhipu AI's GLM-5 is a 744B-parameter MoE model with 40B active parameters, trained on 100K Huawei Ascend chips. It scores 77.8% on SWE-bench and 50 on the Artificial Analysis Intelligence Index, and is released under an MIT license.
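A quick sanity check on what the MoE figures above imply - pure arithmetic on the quoted 744B total / 40B active parameter counts, nothing else assumed:

```python
# GLM-5 headline figures (from the blurb above)
total_params_b = 744   # total parameters, in billions
active_params_b = 40   # parameters active per token, in billions

# Fraction of the model that is exercised for any given token
active_fraction = active_params_b / total_params_b
print(f"Active fraction per token: {active_fraction:.1%}")  # ~5.4%
```

So only about one parameter in twenty participates in each forward pass, which is why a 744B model can run with roughly 40B-dense compute costs.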

China Maps AI Dominance in $70B Five-Year Plan

China's National People's Congress opens this week with a 15th Five-Year Plan that puts $70 billion in semiconductor subsidies and AI-plus manufacturing at the center of its tech race with the West.

Huawei Takes Atlas 950 Global to Challenge Nvidia

Huawei debuts its Atlas 950 SuperPoD at MWC Barcelona - 8,192 NPUs delivering 8 ExaFLOPS - marking its first overseas showcase of the AI supercomputer that directly targets Nvidia's cluster dominance.
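Dividing the two quoted cluster figures gives the implied per-NPU throughput (the blurb does not state the numeric precision these marketing figures carry, so treat the result as an order-of-magnitude estimate):

```python
# Atlas 950 SuperPoD figures (from the blurb above)
npus = 8192             # NPUs in the SuperPoD
total_exaflops = 8.0    # quoted aggregate throughput

# Implied throughput per NPU: ExaFLOPS -> TFLOPS is a factor of 10**6
per_npu_tflops = total_exaflops * 1_000_000 / npus
print(f"~{per_npu_tflops:.0f} TFLOPS per NPU")  # ~977 TFLOPS
```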

DeepSeek V4

DeepSeek V4 is an unreleased trillion-parameter MoE model with ~32B active parameters, native multimodal capabilities, a 1M-token context window, and optimization for Huawei Ascend chips - expected in the first week of March 2026.

Huawei Ascend 910B

Huawei Ascend 910B specs, benchmarks, and real-world performance: 64GB HBM2e, ~1,200 GB/s memory bandwidth, ~600 TFLOPS FP16 - the chip that trained DeepSeek.

Huawei Ascend 910C

Huawei Ascend 910C specs, benchmarks, and performance analysis: 96GB HBM2e, ~1,800 GB/s memory bandwidth, ~800 TFLOPS FP16 - China's flagship AI chip under US sanctions.
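Putting the two chip blurbs side by side, the generational step from 910B to 910C follows directly from the quoted approximate specs (ratios of the "~" figures above, so indicative only):

```python
# Quoted specs from the two Ascend entries above
b910 = {"hbm_gb": 64, "bw_gbs": 1200, "fp16_tflops": 600}  # Ascend 910B
c910 = {"hbm_gb": 96, "bw_gbs": 1800, "fp16_tflops": 800}  # Ascend 910C

# Per-spec improvement factor of the 910C over the 910B
for spec in b910:
    print(f"{spec}: 910C is {c910[spec] / b910[spec]:.2f}x the 910B")
# hbm_gb: 1.50x, bw_gbs: 1.50x, fp16_tflops: 1.33x
```

Memory capacity and bandwidth both scale 1.5x while FP16 compute scales ~1.33x, so by these figures the 910C shifts slightly toward being bandwidth-rich relative to its compute.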