Latest News
Anthropic Launches Institute as Powerful AI Looms

Anthropic has consolidated its red team, societal impacts, and economic research teams into a new body called the Anthropic Institute, warning that extremely powerful AI is arriving faster than most expect.

16 Open-Source RL Libraries, One Shared GPU Bottleneck

A Hugging Face survey of 16 open-source reinforcement learning libraries finds the entire ecosystem has converged on async disaggregated training to fix a single brutal bottleneck: GPU idle time during long rollouts.

Models
NVIDIA Nemotron 3 Super 120B-A12B

NVIDIA Nemotron 3 Super is a 120B-parameter open model with 12B active at inference, combining Mamba-2, LatentMoE, and Multi-Token Prediction for agentic workloads with a 1M token context window.

Grok 4 - xAI's Flagship Reasoning Model

Grok 4 is xAI's frontier reasoning model and the first to break 50% on Humanity's Last Exam, with a 256K-token context window, input pricing of $3 per million tokens, and a Heavy multi-agent variant built on a cluster of 200,000 GPUs.

Recent
Migrating from Pinecone to pgvector

How to move a vector search workload from Pinecone to PostgreSQL with pgvector, covering schema mapping, data migration, and cost savings of up to 75%.

Switching from ChatGPT to Claude

A practical guide to switching from ChatGPT Plus to Claude Pro, covering feature differences, memory transfer, usage limits, and workflow adjustments.
