AI Training

16 Open-Source RL Libraries, One Shared GPU Bottleneck

A Hugging Face survey of 16 open-source reinforcement learning libraries finds the entire ecosystem has converged on async disaggregated training to fix a single brutal bottleneck: GPU idle time during long rollouts.
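The core idea behind async disaggregated training can be sketched in a few lines: rollout generation and gradient updates run on separate workers connected by a buffer, so the trainer never sits idle waiting for slow autoregressive generation. This is a minimal illustrative sketch, not code from any of the surveyed libraries; the queue-based design and all names here are assumptions for illustration.

```python
import queue
import threading

# Hypothetical sketch of async disaggregated RL training:
# a rollout worker (standing in for inference GPUs) fills a
# bounded buffer while the trainer (standing in for training
# GPUs) consumes it, so neither side blocks on the other.

buffer = queue.Queue(maxsize=8)
STOP = object()  # sentinel to signal end of generation

def rollout_worker(n_rollouts):
    # Stands in for slow autoregressive rollout generation.
    for i in range(n_rollouts):
        trajectory = {"id": i, "reward": i * 0.1}
        buffer.put(trajectory)  # blocks only if the buffer is full
    buffer.put(STOP)

def trainer():
    # Stands in for gradient updates; consumes trajectories as
    # they arrive instead of waiting for a full synchronous batch.
    seen = []
    while True:
        item = buffer.get()
        if item is STOP:
            break
        seen.append(item["id"])
    return seen

t = threading.Thread(target=rollout_worker, args=(4,))
t.start()
ids = trainer()
t.join()
print(ids)  # [0, 1, 2, 3]
```

In a real system the two sides run on disjoint GPU pools and the buffer is a distributed replay or trajectory store, but the decoupling principle is the same.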

AWS Trainium2 - Amazon's Cloud Training Chip

AWS Trainium2 is Amazon's second-generation custom AI training chip, powering EC2 Trn2 instances with 96GB of HBM3 per chip and tight integration with the AWS Neuron SDK and SageMaker ecosystem.

Cerebras WSE-3 - The Wafer-Scale AI Engine

The Cerebras Wafer-Scale Engine 3 is the largest chip ever built - an entire TSMC 5nm wafer with 900,000 AI cores, 44GB of on-chip SRAM, and 21 PB/s of memory bandwidth powering the CS-3 AI supercomputer.