# AesSedai/MiMo-V2.5-GGUF

## General Notes
- 04/28/26: While this model should run on the llama.cpp master branch, there was a small change to the inference code to support the `attention_value_scale` parameter. For the best accuracy and performance, I recommend pulling and compiling from this PR branch: https://github.com/ggml-org/llama.cpp/pull/22493.
## Model
This is a text-only GGUF quantization of XiaomiMiMo/MiMo-V2.5. Image and audio input are not included in this GGUF and will not be available until multimodal support is added upstream in llama.cpp.
This repo contains specialized MoE quants for MiMo-V2.5. Because the FFN expert tensors dominate the model's size relative to all other tensors, quantizing them more aggressively while keeping everything else at high quality should yield better quality at a smaller overall size than a comparable naive uniform quantization. To that end, the default quantization type is kept at high quality, while the FFN up and FFN gate tensors are quantized down further, along with the FFN down tensors.
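As an illustration of that tensor-type split, here is a sketch (not the actual quantization script) of how a per-tensor quant type could be selected. The `blk.N.ffn_{up,gate,down}_exps` names follow the usual GGUF naming for MoE expert tensors, and the `choose_quant` helper and the `MIXTURE` table are hypothetical, filled in with the Q5_K_M mixture from the table below:

```python
import re

# Hypothetical mixture for the Q5_K_M build:
# default Q8_0, ffn_up/ffn_gate experts at Q5_K, ffn_down experts at Q6_K.
MIXTURE = [
    (re.compile(r"\.ffn_(up|gate)_exps\."), "Q5_K"),
    (re.compile(r"\.ffn_down_exps\."), "Q6_K"),
]
DEFAULT = "Q8_0"

def choose_quant(tensor_name: str) -> str:
    """Return the quant type for a tensor, falling back to the default."""
    for pattern, qtype in MIXTURE:
        if pattern.search(tensor_name):
            return qtype
    return DEFAULT

# The bulky expert tensors get the smaller types; everything else stays Q8_0.
print(choose_quant("blk.0.ffn_up_exps.weight"))    # Q5_K
print(choose_quant("blk.0.ffn_down_exps.weight"))  # Q6_K
print(choose_quant("blk.0.attn_q.weight"))         # Q8_0
```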
| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | PPL | PPL(Q)/PPL(base) - 1 | KLD |
|---|---|---|---|---|---|
| Q8_0 | 305.68 GiB (8.50 BPW) | Q8_0 | 5.135221 ± 0.030263 | +0.1229% | 0.012455 ± 0.000173 |
| Q5_K_M | 212.42 GiB (5.91 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 5.142293 ± 0.030329 | +0.2608% | 0.014906 ± 0.000230 |
| Q4_K_M | 176.70 GiB (4.92 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 5.204791 ± 0.030827 | +1.4793% | 0.020743 ± 0.000168 |
| IQ4_XS | 136.78 GiB (3.80 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 5.270911 ± 0.031163 | +2.7685% | 0.041606 ± 0.000299 |
| IQ3_S | 105.33 GiB (2.93 BPW) | Q6_K / IQ2_S / IQ2_S / IQ3_S | 5.547074 ± 0.033209 | +8.1529% | 0.092593 ± 0.000568 |
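The relative-PPL column can be sanity-checked from the table itself: the base (unquantized) PPL is not listed, but it can be backed out of the Q8_0 row and then used to reproduce the other rows, assuming the formula PPL(Q)/PPL(base) - 1:

```python
# Mean PPL for each quant, taken from the table above.
ppl = {
    "Q8_0": 5.135221,
    "Q5_K_M": 5.142293,
    "Q4_K_M": 5.204791,
    "IQ4_XS": 5.270911,
    "IQ3_S": 5.547074,
}

# Back out the base PPL from the Q8_0 row (+0.1229% vs base).
base = ppl["Q8_0"] / 1.001229

# Relative PPL increase for each quant, as a percentage;
# these reproduce the table's column to within rounding.
delta = {name: 100 * (p / base - 1) for name, p in ppl.items()}
for name, d in delta.items():
    print(f"{name}: {d:+.4f}%")
```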
