AI & Prompt Tools · Free tool
Open-Source LLM Tracker
Live tracker of 15 open-weight LLMs: Llama 3.3/4, Qwen 3.5, DeepSeek V3.2/R1, Kimi K2, Mistral Large 3, Gemma 3, Phi-4, SmolLM3. Filter by license.
Updated May 2026
| Model | Vendor | Params | License | Context | Best for |
|---|---|---|---|---|---|
| Llama 3.3 70B | Meta | 70B | llama | 128k | Battle-tested production deployments |
| Llama 4 Maverick | Meta | 402B MoE | llama | 1M | Frontier MoE; needs serious GPU |
| Qwen 3.5 72B | Alibaba | 72B | apache | 128k | Top open-weight on coding (SWE-bench) |
| Qwen 3.5 32B | Alibaba | 32B | apache | 128k | Sweet spot quality vs hardware |
| DeepSeek V3.2 | DeepSeek | 671B MoE | custom | 128k | Frontier-class on agentic coding |
| DeepSeek R1 | DeepSeek | 671B MoE | custom | 128k | Reasoning leader, open weights |
| DeepSeek-V3.2-Distill-Qwen-32B | DeepSeek | 32B | apache | 128k | Runs on a single GPU with R1-style reasoning |
| Kimi K2 | Moonshot | 1T MoE | custom | 1M | Long-context open-weight leader |
| Mistral Large 3 | Mistral | 123B | custom | 128k | EU-friendly; tool use |
| Mistral Medium 3 | Mistral | 30B | apache | 32k | Fits on a single H100 |
| Gemma 3 27B | Google | 27B | custom | 128k | Google's open-weight; balanced |
| Gemma 3 9B | Google | 9B | custom | 128k | Fast inference for autocomplete-style tasks |
| Phi-4 | Microsoft | 14B | mit | 32k | Highest-quality dense small model |
| Llama 3.2 3B | Meta | 3B | llama | 128k | Mobile / edge deployments |
| SmolLM3 3B | HuggingFace | 3B | apache | 64k | Tiny, fast, CPU-friendly |
License gotchas: Apache 2.0 and MIT are the most permissive. The Llama license is mostly permissive but requires a separate license from Meta if your products exceed 700M monthly active users, and it carries an acceptable-use policy. Custom licenses (DeepSeek, Kimi, Gemma) often allow commercial use, but check the restrictions. Always read the license file in the model repo before shipping to production.
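The license filter amounts to a lookup over the table's License column. A minimal sketch of that filter in Python, using the page's shorthand license tags (not SPDX identifiers) and an abridged model list:

```python
# Shorthand license tags from the table above (not SPDX identifiers).
MODELS = {
    "Llama 3.3 70B": "llama",
    "Qwen 3.5 32B": "apache",
    "DeepSeek V3.2": "custom",
    "Phi-4": "mit",
    "SmolLM3 3B": "apache",
}

# Apache 2.0 and MIT are the most permissive tags in the table.
PERMISSIVE = {"apache", "mit"}

def permissive_models(models: dict[str, str]) -> list[str]:
    """Return models whose license tag is in the permissive set, sorted by name."""
    return sorted(name for name, lic in models.items() if lic in PERMISSIVE)

print(permissive_models(MODELS))
# ['Phi-4', 'Qwen 3.5 32B', 'SmolLM3 3B']
```

This only screens by the table's shorthand; the actual license file in each model repo is still the source of truth.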
What it does
15 open-weight LLMs tracked: Llama 3.3 / 4 Maverick, Qwen 3.5 (32B / 72B), DeepSeek V3.2 / R1, the DeepSeek-Distill-Qwen variant, Kimi K2, Mistral Large 3 / Medium 3, Gemma 3 (9B / 27B), Phi-4, Llama 3.2 3B, SmolLM3. Filter by license (Apache, MIT, Llama, custom). Always read the license file before shipping a commercial product.
Embed this tool on your site
Paste this snippet into any page. It lazy-loads, includes no tracking scripts, and is sized to fit most dashboards; adjust the height attribute to fit your layout.
<iframe src="https://freetoolarena.com/embed/open-source-llm-tracker" width="100%" height="720" frameborder="0" loading="lazy" title="Open-Source LLM Tracker" style="border:1px solid #e2e8f0;border-radius:12px;max-width:720px;"></iframe>
How to use it
- Filter by license type.
- Pick by params + context + use case.
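Picking by params + context + use case is a two-constraint filter: a parameter ceiling set by your hardware and a context floor set by your workload. A minimal sketch over a hypothetical subset of the table (params in billions, context in thousands of tokens):

```python
# Hypothetical subset of the table: params in billions, context in thousands of tokens.
CATALOG = [
    {"name": "Qwen 3.5 32B", "params_b": 32, "context_k": 128},
    {"name": "Mistral Medium 3", "params_b": 30, "context_k": 32},
    {"name": "Phi-4", "params_b": 14, "context_k": 32},
    {"name": "SmolLM3 3B", "params_b": 3, "context_k": 64},
]

def shortlist(catalog: list[dict], max_params_b: int, min_context_k: int) -> list[str]:
    """Models small enough for your hardware with enough context for your workload."""
    return [m["name"] for m in catalog
            if m["params_b"] <= max_params_b and m["context_k"] >= min_context_k]

# e.g. a single-GPU budget (<= 32B params) with long-ish documents (>= 64k context):
print(shortlist(CATALOG, max_params_b=32, min_context_k=64))
# ['Qwen 3.5 32B', 'SmolLM3 3B']
```

The 32k-context models clear the parameter ceiling but fail the context floor, which is exactly the trade-off the Context column is there to surface.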
See how this compares
- Head-to-head: Ollama vs llama.cpp. Ease of use, control, performance, model coverage. Pick by whether you want zero-config or full control.
- Head-to-head: Groq vs Cerebras. Ultra-fast AI inference providers: tokens per second, models, pricing, and when 1000+ tps changes your app design.