
Open-Source LLM Leaderboard: February 2026
Rankings of the best open-weight and open-source large language models in February 2026, including DeepSeek V3.2, Qwen 3.5, Llama 4 Maverick, GLM-5, and Mistral 3.
Z.ai releases GLM-5, a 744B parameter open-source Mixture-of-Experts model purpose-built for agentic tasks, scoring 77.8% on SWE-bench Verified and 56.2% on Terminal-Bench 2.0.

A thorough review of DeepSeek V3.2, the 671B parameter MoE model that delivers frontier-level performance at dramatically lower cost with an MIT license.

A practical tutorial on running open-source language models locally using Ollama, llama.cpp, and LM Studio, with hardware requirements and model recommendations.
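When sizing hardware for local inference, a common back-of-envelope heuristic is parameter count times bits per weight, plus some headroom for the KV cache and activations. The sketch below illustrates that heuristic; the function name and the 1.2x overhead factor are illustrative assumptions, not figures from the tutorial.

```python
def vram_estimate_gb(params_billions: float, bits_per_weight: int,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate in GB: weight bytes scaled by an assumed
    overhead factor for KV cache and activations (heuristic only)."""
    weight_gb = params_billions * bits_per_weight / 8
    return weight_gb * overhead

# Example: a 70B-parameter model at 4-bit quantization
print(round(vram_estimate_gb(70, 4), 1))  # roughly 42 GB before context length is considered
```

Real-world usage varies with context length, batch size, and runtime, so treat this as a starting point rather than a guarantee.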

DeepSeek releases V3.2 under MIT license with 671B MoE architecture, matching GPT-5 at one-tenth the cost and achieving gold-medal performance on IMO and IOI competitions.

A comprehensive review of Meta's Llama 4 Maverick, a 400B parameter open-weight MoE model with 128 experts, 1M context, and multimodal capabilities.