Mixtral 8x22B
141B total / 39B active parameters
Sparse mixture-of-experts model: at each layer a router sends every token to 2 of the 8 expert feed-forward blocks, so only 39B of the 141B total parameters participate in any forward pass. Per-token compute therefore tracks the active count, not the total, which is where the model's efficiency comes from (see the sketch below).
Tags: open-source · moe · efficient · apache-2.0 · code
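To make the sparse-activation idea concrete, here is a minimal PyTorch sketch of top-2 expert routing. The class name, layer shapes, and dimensions are illustrative assumptions, not Mixtral's actual implementation; a production MoE additionally needs load balancing and efficient batched expert dispatch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    """Sketch of a sparse MoE feed-forward layer: route each token to 2 of n_experts.

    All sizes below (d_model=512, d_ff=2048) are toy values for illustration.
    """
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router scores every token against every expert.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # 8 expert feed-forward blocks; only 2 run per token.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                          # x: (tokens, d_model)
        logits = self.router(x)                    # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)       # renormalize over the 2 chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                      # which tokens picked expert e, and in which slot
            rows, slot = mask.nonzero(as_tuple=True)
            if rows.numel():                       # run expert e only on its assigned tokens
                out[rows] += weights[rows, slot, None] * expert(x[rows])
        return out

layer = Top2MoE()
y = layer(torch.randn(10, 512))  # each of the 10 tokens touched only 2 of the 8 experts
```

Because every token exercises just 2 of the 8 expert blocks, the parameters held in memory scale with all experts while per-token FLOPs scale with the selected two, mirroring the 39B-active-of-141B-total ratio described above.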