
Mistral Small 4
Mistral AI's unified MoE model - 119B total parameters, 6B active per token, 128 experts, 256K context, configurable reasoning, Apache 2.0 license.
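A sparse MoE model like this activates only a small slice of its experts for each token, which is how 119B total parameters yield only 6B active per token. The sketch below is illustrative only, not Mistral's published architecture: the hidden size and the top-k value are assumptions, chosen to show the routing mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 128  # from the listing
TOP_K = 2          # assumed; the real top-k is not stated in this blurb

def route(token_hidden, router_weights, top_k=TOP_K):
    """Pick the top-k experts for one token via softmax router scores."""
    logits = router_weights @ token_hidden          # (num_experts,)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    experts = np.argsort(probs)[-top_k:][::-1]      # indices of chosen experts
    gates = probs[experts] / probs[experts].sum()   # renormalized mixing weights
    return experts, gates

hidden = rng.standard_normal(64)                    # assumed hidden size
router = rng.standard_normal((NUM_EXPERTS, 64))
experts, gates = route(hidden, router)
print(experts, gates)
```

Each token's output is the gate-weighted sum of just the chosen experts' outputs, so compute scales with the active parameters, not the total.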
![FLUX.2 [max]](https://awesomeagents.ai/images/models/flux-2-max_hu_c51afbf082a591e5.jpg)
Black Forest Labs' top-tier image model - highest quality, best prompt adherence, grounded generation with web context, and professional-grade editing consistency at $0.07 per megapixel.
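Per-megapixel billing makes the cost of a generation a simple function of output resolution. A minimal sketch of that arithmetic, using the $0.07/MP figure from the listing (the resolutions are illustrative):

```python
PRICE_PER_MEGAPIXEL = 0.07  # USD, from the listing

def image_cost(width_px, height_px, price_per_mp=PRICE_PER_MEGAPIXEL):
    """Cost of one generation billed by output megapixels."""
    megapixels = (width_px * height_px) / 1_000_000
    return megapixels * price_per_mp

# A 1024x1024 image is ~1.05 MP, so roughly $0.073 per generation.
print(f"${image_cost(1024, 1024):.3f}")
```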
![FLUX.2 [flex]](https://awesomeagents.ai/images/models/flux-2-flex_hu_7f5a1bc9621218c0.jpg)
Black Forest Labs' developer-controlled image model with adjustable steps and guidance - maximum precision for typography, UI mockups, and detail-critical workflows.
![FLUX.2 [pro]](https://awesomeagents.ai/images/models/flux-2-pro_hu_2b6a584fe281c164.jpg)
Black Forest Labs' production-grade image generation API - state-of-the-art quality at affordable pricing, optimized for commercial workflows with 4MP output.
![FLUX.2 [dev]](https://awesomeagents.ai/images/models/flux-2-dev_hu_94ec9e57c8624223.jpg)
Black Forest Labs' 32B open-weight image model - the most powerful open alternative for text-to-image, editing, and multi-reference generation with up to 10 reference images.
![FLUX.2 [klein] 9B](https://awesomeagents.ai/images/models/flux-2-klein-9b_hu_25add23b30e4a4ae.jpg)
Black Forest Labs' 9B parameter distilled image model - sub-second generation with higher quality than the 4B variant, 19.6 GB VRAM, non-commercial license.
![FLUX.2 [klein] 4B](https://awesomeagents.ai/images/models/flux-2-klein-4b_hu_dec233f8cc7116ef.jpg)
Black Forest Labs' fastest open-source image generation model - 4B parameters, Apache 2.0 license, sub-second generation on consumer GPUs with 13GB VRAM.
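A quick sanity check on why a 4B-parameter model fits in the quoted 13 GB: at 2 bytes per weight (assuming bf16/fp16, which the listing does not specify), the weights alone take about 7.5 GB, leaving headroom for activations, encoders, and framework overhead.

```python
PARAMS = 4e9
BYTES_PER_PARAM = 2  # assumes bf16/fp16 weights; not stated in the listing

weights_gb = PARAMS * BYTES_PER_PARAM / 1024**3
print(f"weights alone: {weights_gb:.1f} GB")  # ~7.5 GB of the quoted 13 GB
```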

Italian-Legal-BERT is a 110M-parameter domain-adapted BERT model for Italian legal NLP, trained on 3.7GB of court decisions from Italy's National Jurisprudential Archive.

NVIDIA Nemotron 3 Super is a 120B-parameter open model with 12B active at inference, combining Mamba-2, LatentMoE, and Multi-Token Prediction for agentic workloads with a 1M token context window.

Grok 4 is xAI's frontier reasoning model, the first to break 50% on Humanity's Last Exam, with a 256K context window, $3 per million input tokens, and a Heavy multi-agent variant, trained on a 200,000-GPU cluster.

Community fine-tune that distills Claude Opus 4.6 reasoning into Qwen3.5-27B via LoRA. 28B parameters, Apache 2.0, no published benchmarks.

OpenAI's most capable frontier model offers native computer use, a 1M-token context window, and three variants priced at $2.50/$15 per million input/output tokens.
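Quoted per-million-token prices translate directly into per-request cost. A minimal sketch, assuming the usual reading of "$2.50/$15" as input/output rates (the token counts below are illustrative):

```python
INPUT_PER_M = 2.50    # USD per 1M input tokens, from the listing
OUTPUT_PER_M = 15.00  # USD per 1M output tokens, from the listing

def request_cost(input_tokens, output_tokens):
    """Cost of one API call under per-million-token pricing."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# 100K tokens of context in, 2K tokens out: $0.25 + $0.03 = $0.28
print(f"${request_cost(100_000, 2_000):.2f}")
```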