Mixtral of Experts
We introduce Mixtral 8×7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and combines their outputs.
Even though each token only sees two experts, the selected experts can be different at each timestep. As a result, each token has access to 47B parameters but only uses 13B active parameters during inference.
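To make the routing concrete, the following is a minimal PyTorch sketch of a top-2 sparse MoE feedforward layer: a router produces one logit per expert, the two highest-scoring experts process each token, and their outputs are combined with softmax-normalized weights. The class and variable names are illustrative, the experts are plain MLPs rather than Mixtral's SwiGLU blocks, and details such as load balancing are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Illustrative top-2 mixture-of-experts feedforward layer (not the reference implementation)."""

    def __init__(self, dim: int, hidden_dim: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: one logit per expert for each token.
        self.router = nn.Linear(dim, num_experts, bias=False)
        # Experts: independent feedforward blocks (plain MLPs here for brevity).
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden_dim), nn.SiLU(), nn.Linear(hidden_dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        logits = self.router(x)                                   # (num_tokens, num_experts)
        weights, selected = torch.topk(logits, self.top_k, dim=-1) # keep the 2 best experts per token
        weights = F.softmax(weights, dim=-1)                       # normalize over the selected experts only
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            token_idx, slot = torch.where(selected == i)            # tokens routed to expert i
            if token_idx.numel() > 0:
                out[token_idx] += weights[token_idx, slot, None] * expert(x[token_idx])
        return out
```

Because only the two selected experts run per token, the compute per token scales with 2 expert blocks rather than 8, which is how the model stores roughly 47B parameters while using about 13B per token at inference.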