
Unsafe Agents, Rising AI Tides, and Training Traps
Three new papers on agent prompt injection attack rates, MIT's broad-based AI automation finding, and a silent normalization-optimizer coupling failure in LLM training.

Side-by-side fine-tuning costs for OpenAI, Google, Together AI, Fireworks, Mistral, and self-hosted GPU setups, with cost breakdowns for LoRA versus full fine-tuning.
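
To make the LoRA side of that comparison concrete, here is a minimal setup sketch using Hugging Face's peft library; the base model name and every hyperparameter are illustrative assumptions, not figures from the comparison.

```python
# Minimal LoRA fine-tuning setup via Hugging Face peft; the base model
# and hyperparameters below are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling applied to the adapter output
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

Because only the small adapter matrices receive gradients, a LoRA job fits in far less GPU memory than full fine-tuning of the same base model, which is where most of the provider price gap comes from.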

Fine-tuning takes a pre-trained AI model and trains it further on your own data so it learns your specific task, tone, or domain - here is how it works, what it costs, and when to use it.
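
As a concrete example of what "your own data" looks like in practice, here is the chat-style JSONL format that OpenAI's fine-tuning API expects; the conversation itself is an invented placeholder.

```python
# Write a tiny chat-format JSONL training file, the shape OpenAI's
# fine-tuning API accepts; the example conversation is a placeholder.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "You are a concise support agent."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
    ]},
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```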

Anthropic launched the Claude Certified Architect exam and invested $100M in its Partner Network. The exam is free for the first 5,000 partner employees and $99 after; Accenture alone is training 30,000 people.

Mistral's new Forge platform lets enterprises train frontier-grade AI models entirely on proprietary data, without sending any of it to a third party.

Nvidia commits a gigawatt of Vera Rubin chips to Mira Murati's startup, a supply the FT values at tens of billions of dollars, alongside an undisclosed cash investment.

A Hugging Face survey of 16 open-source reinforcement learning libraries finds the entire ecosystem has converged on async disaggregated training to fix a single brutal bottleneck: GPU idle time during long rollouts.
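
The pattern the survey describes is easy to see in miniature: rollout workers keep generating while the learner consumes whatever is ready, so neither pool blocks the other. The toy sketch below uses threads and a queue; every name in it is a hypothetical stand-in, not any library's API.

```python
# Toy sketch of async disaggregated RL training: generation and learning
# run concurrently, so training GPUs never idle during long rollouts.
# All names are hypothetical stand-ins, not any library's API.
import queue
import random
import threading
import time

buffer = queue.Queue(maxsize=64)  # staging buffer between the two pools

def rollout_worker(worker_id: int) -> None:
    """Stands in for a generation worker producing slow, variable-length rollouts."""
    while True:
        time.sleep(random.uniform(0.1, 0.5))  # placeholder for LLM generation
        buffer.put(f"trajectory-from-worker-{worker_id}")

def learner(steps: int) -> None:
    """Consumes rollouts as they arrive instead of waiting for a full sync."""
    for step in range(steps):
        batch = [buffer.get() for _ in range(4)]
        print(f"step {step}: updated policy on {len(batch)} rollouts")  # placeholder gradient step

for i in range(4):
    threading.Thread(target=rollout_worker, args=(i,), daemon=True).start()
learner(steps=5)
```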

Andrew Ng says AGI is decades away and the real AI bubble risk is in the training layer - not inference. We examine both claims against the data.

Max Schwarzer, VP of Research and Head of Post-Training at OpenAI, leaves to return to RL research at Anthropic after a year leading the team that shipped GPT-5, 5.1, 5.2, and 5.3-Codex.

Complete specs, benchmarks, and analysis of the NVIDIA Rubin R200 GPU - the post-Blackwell flagship with 288GB HBM4, 22 TB/s bandwidth, and 50 PFLOPS FP4.
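
Taking those numbers at face value, a quick roofline calculation shows how compute-heavy the balance is: a kernel needs roughly 2,300 FLOPs of work per byte moved before the R200 stops being bandwidth-bound.

```python
# Back-of-envelope roofline ridge point from the quoted R200 specs
# (50 PFLOPS FP4, 22 TB/s HBM4); both figures are the article's, not
# independently verified measurements.
fp4_flops = 50e15  # 50 PFLOPS
hbm_bw = 22e12     # 22 TB/s

ridge = fp4_flops / hbm_bw  # FLOPs per byte at the compute/bandwidth crossover
print(f"{ridge:.0f} FLOPs/byte")  # ~2273; below this, kernels are bandwidth-bound
```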

AWS Trainium2 is Amazon's second-generation custom AI training chip, powering EC2 Trn2 instances with 96GB of HBM3 per chip and tight integration with the AWS Neuron SDK and SageMaker ecosystem.
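
Training code reaches Trainium2 through PyTorch/XLA rather than a bespoke device API; the sketch below shows that generic pattern, assuming a Neuron-enabled torch-xla environment, with a placeholder model and random data.

```python
# Minimal single-step training sketch on an XLA device, the pattern the
# Neuron SDK builds on for Trainium; model and data are placeholders,
# and this assumes torch-xla is installed in a Neuron environment.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # resolves to a NeuronCore on Trn2 instances
model = torch.nn.Linear(1024, 1024).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(32, 1024, device=device)
y = torch.randn(32, 1024, device=device)

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
xm.optimizer_step(optimizer, barrier=True)  # steps and flushes the lazy XLA graph
```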

The Cerebras Wafer-Scale Engine 3 is the largest chip ever built - an entire TSMC 5nm wafer with 900,000 AI cores, 44GB of on-chip SRAM, and 21 PB/s of memory bandwidth powering the CS-3 AI supercomputer.
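
The quoted figures imply an unusual memory layout, as a little arithmetic shows: the SRAM works out to under 50KB per core, swept at an aggregate rate no off-chip HBM stack approaches.

```python
# Quick arithmetic on the quoted WSE-3 figures (44GB SRAM, 900,000 cores,
# 21 PB/s); all three numbers come from the article, not measurement.
sram_bytes = 44e9
cores = 900_000
bandwidth = 21e15  # bytes per second

print(f"{sram_bytes / cores / 1024:.0f} KiB of SRAM per core")      # ~48 KiB
print(f"{bandwidth / sram_bytes:.0f} full-SRAM sweeps per second")  # ~477,000
```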