
GLM-5.1 Tops SWE-Bench Pro With Zero NVIDIA Hardware
Z.ai's GLM-5.1 scores 58.4 on SWE-bench Pro, edging out GPT-5.4 and Claude Opus 4.6, after being trained on 100,000 Huawei Ascend chips with no US silicon.

Huawei Atlas 350 specs, benchmarks, and analysis. Ascend 950PR chip, 112GB HiBL 1.0 HBM, 1.56 PFLOPS FP4, 600W - China's first domestically developed FP4-capable AI accelerator.
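From the figures quoted above, a quick back-of-envelope check of the Atlas 350's power efficiency — this is a sketch using only the two numbers in the blurb (1.56 PFLOPS FP4 peak, 600W board power); the helper name is illustrative, not from any vendor API:

```python
# Efficiency of the Atlas 350 (Ascend 950PR) from the figures above:
# peak FP4 throughput per watt at the quoted board power.

def tflops_per_watt(peak_pflops: float, power_w: float) -> float:
    """Peak throughput (TFLOPS) delivered per watt of board power."""
    return peak_pflops * 1000 / power_w  # 1 PFLOPS = 1,000 TFLOPS

print(f"{tflops_per_watt(1.56, 600):.2f} TFLOPS/W at FP4")  # 1,560 TFLOPS / 600 W = 2.60
```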

China announced up to $70 billion in semiconductor and AI subsidies during the Two Sessions - one of the largest government chip programs in history, aimed at full self-sufficiency as US export controls tighten.

Zhipu AI's GLM-5 is a 744B MoE model with 40B active parameters, trained on 100K Huawei Ascend chips, scoring 77.8% on SWE-bench and 50 on the Artificial Analysis Intelligence Index - MIT licensed.
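The 744B-total / 40B-active split above can be turned into rough sizing numbers. A minimal sketch, assuming FP8 (1 byte/param) weights and the common ~2-FLOPs-per-active-parameter approximation for a forward pass — both assumptions, not figures from the article:

```python
# Back-of-envelope sizing for a sparse MoE model, using the figures
# quoted in the blurb (744B total parameters, 40B active per token).

def moe_estimates(total_params_b: float, active_params_b: float,
                  bytes_per_param: int = 1) -> dict:
    """Rough MoE sizing: weight memory and per-token forward compute.

    Assumes ~2 FLOPs per active parameter per forward pass; real
    numbers vary with architecture details (attention, routing, etc.).
    """
    return {
        "weight_memory_gb": total_params_b * bytes_per_param,  # 1e9 params x bytes ~= GB
        "forward_gflops_per_token": 2 * active_params_b,       # ~2 FLOPs per active param
        "sparsity_ratio": active_params_b / total_params_b,    # fraction of weights used per token
    }

est = moe_estimates(744, 40, bytes_per_param=1)  # FP8 weights assumed
print(est)  # ~744 GB of weights, ~80 GFLOPs/token, ~5.4% of weights active
```

The sparsity ratio is why a 744B model can serve tokens at roughly the cost of a 40B dense model while still needing the full 744 GB resident in memory.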

China's National People's Congress opens this week with a 15th Five-Year Plan that puts $70 billion in semiconductor subsidies and AI-plus manufacturing at the center of its tech race with the West.

Huawei debuts its Atlas 950 SuperPoD at MWC Barcelona - 8,192 NPUs delivering 8 ExaFLOPS - marking its first overseas showcase of the AI supercomputer that directly targets Nvidia's cluster dominance.

Zhipu AI's 744B open-source model GLM-5 was trained entirely on Huawei Ascend chips and now competes with GPT-5.2 and Claude Opus on major benchmarks.

Awesome Agents launches a dedicated Hardware section with detailed spec pages for 21 GPUs, TPUs, and AI accelerators - from datacenter flagships to home lab favorites.

DeepSeek V4 is an unreleased trillion-parameter MoE model with ~32B active parameters, native multimodal capabilities, a 1M-token context window, and optimization for Huawei Ascend chips - expected in the first week of March 2026.

DeepSeek will release V4, a natively multimodal trillion-parameter model with a 1M token context window, in the first week of March - optimized for Huawei Ascend chips, not Nvidia.

Huawei Ascend 910B specs, benchmarks, and real-world performance. 64GB HBM2e, ~1,200 GB/s bandwidth, ~600 TFLOPS FP16 - the chip that trained DeepSeek.

Huawei Ascend 910C specs, benchmarks, and performance analysis. 96GB HBM2e, ~1,800 GB/s bandwidth, ~800 TFLOPS FP16 - China's flagship AI chip under US sanctions.
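The two Ascend spec pages above can be compared on compute-to-bandwidth balance. A sketch using only the four approximate figures quoted (600/800 TFLOPS FP16, 1,200/1,800 GB/s) in a simple roofline view; the function name is illustrative:

```python
# Comparing the Ascend 910B and 910C from the approximate figures in the
# spec blurbs above: peak FP16 throughput per unit of HBM bandwidth (the
# "balance point" arithmetic intensity in a simple roofline model).

def balance_point_flops_per_byte(peak_tflops: float, bandwidth_gbs: float) -> float:
    """FLOPs a kernel must perform per byte moved to be compute-bound."""
    return (peak_tflops * 1e12) / (bandwidth_gbs * 1e9)

chips = {
    "Ascend 910B": (600, 1200),   # ~600 TFLOPS FP16, ~1,200 GB/s
    "Ascend 910C": (800, 1800),   # ~800 TFLOPS FP16, ~1,800 GB/s
}
for name, (tflops, bw) in chips.items():
    print(f"{name}: {balance_point_flops_per_byte(tflops, bw):.0f} FLOPs/byte")
# 910B ~500 FLOPs/byte, 910C ~444 FLOPs/byte
```

A lower balance point means the 910C's extra bandwidth slightly outpaced its extra compute, which matters for bandwidth-bound inference workloads like autoregressive decoding.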