
AesSedai/MiMo-V2.5-GGUF


Notes

  • 04/28/26: While this model should run on the llama.cpp master branch, a small change to the inference code is needed to support the attention_value_scale parameter. For the best accuracy/performance, I recommend pulling and compiling from this PR branch: https://github.com/ggml-org/llama.cpp/pull/22493.
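If you want to try that PR before it lands on master, something like the following should work. This is a sketch, not an official build recipe: the `pull/22493/head` ref comes from the PR link above, and the CMake flags are just a CPU-only example (add e.g. `-DGGML_CUDA=ON` for your hardware).

```shell
# Clone llama.cpp and check out the PR via GitHub's pull/<id>/head ref.
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
git fetch origin pull/22493/head:pr-22493
git checkout pr-22493

# Standard CMake build; adjust flags for your backend/hardware.
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j
```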

Model

This is a text-only GGUF quantization of XiaomiMiMo/MiMo-V2.5. Image and audio input are not included in this GGUF and will not be available until support is added upstream in llama.cpp.

This repo contains specialized MoE quants for MiMo-V2.5. The idea is that, because the FFN tensors are huge compared to the rest of the tensors in the model, it should be possible to achieve better quality at a smaller overall model size than a similar naive quantization. To that end, the default quantization type is kept at high quality while the FFN up and FFN gate tensors are quantized more aggressively, along with the FFN down tensors.
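The size arithmetic behind this can be sketched numerically. The parameter split below is purely illustrative (not MiMo-V2.5's actual tensor sizes), and the bits-per-weight figures are rough per-type averages; the point is that keeping the small non-FFN tensors at Q8_0 costs little while compressing the dominant FFN tensors drives the total size down.

```python
# Sketch of mixed-precision MoE quant sizing. All numbers are
# hypothetical: a large-MoE-style split where expert FFN tensors
# dominate the parameter count.
GIB = 1024**3

params = {
    "ffn_up":   100e9,
    "ffn_gate": 100e9,
    "ffn_down": 100e9,
    "other":     10e9,  # attention, embeddings, norms, etc.
}

def model_size_gib(bpw):
    """Total size in GiB for a per-tensor-group bits-per-weight map."""
    bits = sum(params[g] * bpw[g] for g in params)
    return bits / 8 / GIB

# Naive quant: everything at ~4.5 BPW (roughly Q4_K-class).
naive = model_size_gib({g: 4.5 for g in params})

# MoE-style mixture: aggressive FFN quants, high-quality default.
# (~3.44 BPW for IQ3_S-class, ~4.25 for IQ4_XS-class, ~8.5 for Q8_0.)
mixed = model_size_gib({
    "ffn_up":   3.44,
    "ffn_gate": 3.44,
    "ffn_down": 4.25,
    "other":    8.5,
})

print(f"naive ~Q4: {naive:.1f} GiB, MoE mixture: {mixed:.1f} GiB")
```

Even though the "other" tensors here sit at Q8_0, the mixture comes out smaller than the uniform quant, while the quality-sensitive non-FFN weights stay near-lossless.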

| Quant | Size | Mixture (default / ffn_up / ffn_gate / ffn_down) | PPL | Mean PPL(Q)/PPL(base) − 1 | KLD |
|---|---|---|---|---|---|
| Q8_0 | 305.68 GiB (8.50 BPW) | Q8_0 | 5.135221 ± 0.030263 | +0.1229% | 0.012455 ± 0.000173 |
| Q5_K_M | 212.42 GiB (5.91 BPW) | Q8_0 / Q5_K / Q5_K / Q6_K | 5.142293 ± 0.030329 | +0.2608% | 0.014906 ± 0.000230 |
| Q4_K_M | 176.70 GiB (4.92 BPW) | Q8_0 / Q4_K / Q4_K / Q5_K | 5.204791 ± 0.030827 | +1.4793% | 0.020743 ± 0.000168 |
| IQ4_XS | 136.78 GiB (3.80 BPW) | Q8_0 / IQ3_S / IQ3_S / IQ4_XS | 5.270911 ± 0.031163 | +2.7685% | 0.041606 ± 0.000299 |
| IQ3_S | 105.33 GiB (2.93 BPW) | Q6_K / IQ2_S / IQ2_S / IQ3_S | 5.547074 ± 0.033209 | +8.1529% | 0.092593 ± 0.000568 |
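The PPL deltas in the table are internally consistent: taking the Q8_0 row's +0.1229% as the reference, the unquantized base perplexity can be recovered, and every other row's percentage matches (PPL(Q)/PPL(base) − 1) within rounding. A quick check:

```python
import math

# Mean PPL and reported delta (%) from the table above; uncertainties omitted.
ppl = {
    "Q8_0":   (5.135221, 0.1229),
    "Q5_K_M": (5.142293, 0.2608),
    "Q4_K_M": (5.204791, 1.4793),
    "IQ4_XS": (5.270911, 2.7685),
    "IQ3_S":  (5.547074, 8.1529),
}

# Recover the base-model PPL from the Q8_0 row.
base = ppl["Q8_0"][0] / (1 + ppl["Q8_0"][1] / 100)

# Each quant's delta should equal (PPL(Q)/PPL(base) - 1) within rounding.
for name, (p, delta_pct) in ppl.items():
    recomputed = (p / base - 1) * 100
    assert math.isclose(recomputed, delta_pct, abs_tol=5e-3), name

print(f"base PPL ≈ {base:.4f}")
```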

[Figure: KLD graph] [Figure: PPL graph]

