Google Gemma 4 - Four Open Models Under Apache 2.0

Gemma 4 is Google DeepMind's most capable open model family: four variants from 2B to 31B, an Apache 2.0 license, multimodality across text/image/video/audio, and the 31B Dense variant ranking #3 among all open-weight models on Chatbot Arena.

Xiaomi MiMo-V2-Pro - Agentic 1T MoE Model

Xiaomi's MiMo-V2-Pro is a 1-trillion-parameter MoE model with 42B active parameters, a 1M-token context window, and agentic coding performance that rivals Claude Sonnet 4.6 at a fraction of the cost.

Mistral Small 4

Mistral AI's unified MoE model - 119B total parameters, 6B active per token, 128 experts, a 256K-token context window, configurable reasoning, and an Apache 2.0 license.
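
To make the total-vs-active distinction concrete, here is a minimal sparse-MoE routing sketch in Python. It assumes softmax top-2 gating, which the blurb does not specify; the dimensions, the top-k value, and all weights are illustrative toys, not Mistral's implementation. The point is that only the selected experts run for each token, which is how a 119B-parameter model can compute with only ~6B parameters per token.

```python
import numpy as np

# Minimal sparse-MoE routing sketch. Top-2 softmax gating, the tiny
# dimensions, and the random weights are illustrative assumptions;
# the announcement only states 128 experts with ~6B of 119B
# parameters active per token.

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 128, 2

# Each "expert" is reduced to a single weight matrix for brevity.
experts = rng.normal(size=(n_experts, d_model, d_model)) * 0.02
router_w = rng.normal(size=(d_model, n_experts)) * 0.02

def moe_layer(x):
    """Route one token vector through its top-k experts only."""
    logits = x @ router_w                 # (n_experts,) router scores
    chosen = np.argsort(logits)[-top_k:]  # indices of the top-k experts
    gates = np.exp(logits[chosen])
    gates /= gates.sum()                  # softmax over the chosen experts
    # Only top_k of the 128 experts execute, which is why active
    # parameters per token are a small fraction of the total.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

token = rng.normal(size=d_model)
print(moe_layer(token).shape)  # (64,)
```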

NVIDIA Nemotron 3 Super 120B-A12B

NVIDIA Nemotron 3 Super is a 120B-parameter open model with 12B parameters active at inference. It combines Mamba-2, LatentMoE, and Multi-Token Prediction, targeting agentic workloads with a 1M-token context window.
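
To illustrate one of the named techniques, here is a hedged Multi-Token Prediction sketch: k output heads predict tokens t+1..t+k from the same hidden state rather than only the next token, which published MTP variants use for a denser training signal and faster drafting. The parallel linear heads, shapes, and head count below are illustrative assumptions, not NVIDIA's actual design.

```python
import numpy as np

# Toy Multi-Token Prediction (MTP) sketch: n_future output heads
# predict tokens t+1 .. t+n_future from the same hidden state,
# instead of a single next-token head. All shapes and the plain
# linear heads are illustrative assumptions, not Nemotron 3's
# actual architecture.

rng = np.random.default_rng(1)
d_model, vocab_size, n_future = 64, 1000, 2

heads = rng.normal(size=(n_future, d_model, vocab_size)) * 0.02

def mtp_logits(hidden):
    """Return one logit vector per future position t+1 .. t+n_future."""
    return np.stack([hidden @ heads[k] for k in range(n_future)])

hidden_state = rng.normal(size=d_model)
logits = mtp_logits(hidden_state)
print(logits.shape)  # (2, 1000): logits for the next two tokens
```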