
Llama 4 Scout
Meta's Llama 4 Scout is a 109B-total, 17B-active MoE model with 16 experts and a 10M-token context window - the longest of any open-weight model - plus native multimodal support for text and images.

A data-driven comparison of Alibaba's Qwen3.5-122B-A10B and Meta's Llama 4 Maverick - two open-weight MoE models with radically different approaches to parameter efficiency and benchmark performance.

David vs Goliath: Qwen3.5-35B-A3B activates 3B parameters and beats Llama 4 Scout's 17B active on MMLU-Pro, GPQA, and coding benchmarks - but Scout's 10M context window and native multimodal support tell a different story.

Meta will deploy up to 6 gigawatts of AMD Instinct GPUs across multiple generations in a deal worth up to $100 billion. AMD has issued Meta a warrant for 160 million shares - roughly 10% of the company - at a penny per share, tied to delivery milestones and AMD's share price reaching $600.

Bridgewater Associates warns AI capex has entered a 'dangerous phase' as Alphabet, Amazon, Meta, and Microsoft commit $650 billion to infrastructure in 2026, up 67% from last year.

From pirated libraries to destroyed books to ancient manuscripts, AI companies have consumed millions of copyrighted works and are now approaching the limits of available human text. Here is what they used, what they stole, and what they are looking for next.