
ubergarm/Qwen3.6-27B-GGUF


ik_llama.cpp imatrix Quantizations of Qwen/Qwen3.6-27B

NOTE: ik_llama.cpp can also run your existing GGUFs from bartowski, unsloth, mradermacher, etc., if you want to try it out before downloading my quants. Only a couple of quants in this collection are compatible with mainline llama.cpp/LMStudio/KoboldCPP/etc. (as noted in their specific descriptions); all others require ik_llama.cpp.
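For example (a hedged sketch, not part of this collection's instructions): serving an existing GGUF with ik_llama.cpp looks the same as with mainline llama.cpp. The model path, context size, and GPU layer count below are placeholders.

```shell
# Hypothetical example: serve any existing GGUF with ik_llama.cpp's server.
# All paths and values here are placeholders, not recommendations.
./build/bin/llama-server \
    --model /path/to/your-existing-model.gguf \
    --ctx-size 8192 \
    --n-gpu-layers 99 \
    --host 127.0.0.1 \
    --port 8080
```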

Some of ik's new quant types are supported by the Nexesenex/croco.cpp fork of KoboldCPP, which offers Windows builds. Also check for ik_llama.cpp Windows builds by Thireus here.

These quants provide best-in-class perplexity for their memory footprint.

Big Thanks

Shout out to Wendell and the Level1Techs crew, the community forums, and YouTube channel! BIG thanks for providing BIG hardware expertise and access to run these experiments and make these great quants available to the community!!!

Also thanks to all the folks in the quantizing and inferencing community on BeaverAI Club Discord and on r/LocalLLaMA for tips and tricks helping each other run, test, and benchmark all the fun new models! Thanks to huggingface for hosting all these big quants!

Finally, I really appreciate the support from aifoundry.org so check out their open source RISC-V based solutions!

Quant Collection

Perplexity computed against wiki.test.raw (lower is "better").

(Perplexity chart and KLD chart omitted.)

These two are just test quants for baseline perplexity comparison and not available for download here:

  • BF16 50.103 GiB (16.002 BPW)
    • PPL over 580 chunks for n_ctx=512 = 6.9066 +/- 0.04552
  • Q8_0 26.622 GiB (8.502 BPW)
    • PPL over 580 chunks for n_ctx=512 = 6.9063 +/- 0.04551
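As a sanity check on the figures above, GiB and BPW are tied together by the parameter count (params ≈ GiB × 2³⁰ × 8 / BPW). A small helper (the function name is mine, not from the recipes) confirms both baselines imply the same ~27B model:

```shell
# Estimate parameter count (in billions) from file size in GiB and bits per weight.
# params = bytes * 8 bits / BPW, with 1 GiB = 2^30 bytes.
params_from_size() {
  awk -v gib="$1" -v bpw="$2" \
    'BEGIN { printf "%.1f\n", gib * 1073741824 * 8 / bpw / 1e9 }'
}

params_from_size 50.103 16.002   # BF16
params_from_size 26.622 8.502    # Q8_0
```

Both come out to roughly 26.9 billion parameters, consistent with a ~27B model.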

NOTE: If a model is split, the first file is much smaller and contains only metadata. That is on purpose; it's fine!

IQ5_KS 18.532 GiB (5.919 BPW)

PPL over 580 chunks for n_ctx=512 = 6.9341 +/- 0.04578

This ik_llama.cpp exclusive quant is likely among the best quality available for 24GB full offload.

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 64 Repeating Layers [0-63]

## Gated Attention/Delta Net [Blended 0-63]
blk\..*\.attn_gate\.weight=q6_0
blk\..*\.attn_qkv\.weight=q6_0
blk\..*\.attn_output\.weight=q6_0
blk\..*\.attn_q\.weight=q6_0
blk\..*\.attn_k\.weight=q6_0
blk\..*\.attn_v\.weight=q6_0
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Dense Layers [0-63]
blk\..*\.ffn_down\.weight=iq5_ks
blk\..*\.ffn_(gate|up)\.weight=iq5_ks

# Non-Repeating Layers
token_embd\.weight=q6_0
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

    #--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/imatrix-Qwen3.6-27B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/Qwen3.6-27B-BF16-00001-of-00002.gguf \
    /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/Qwen3.6-27B-IQ5_KS.gguf \
    IQ5_KS \
    128
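The grep/sed pipeline in these recipes simply strips the `#` comment lines and joins the remaining tensor rules into the single comma-separated string that `--custom-q` expects. A minimal standalone demo (the two rules below are illustrative, not a real recipe):

```shell
# Demo of the recipe's grep+sed transform: drop '#' comment lines,
# then collapse newlines into commas and trim leading/trailing commas.
custom="
# this comment line is removed
token_embd\.weight=q6_0
output\.weight=q8_0
"
custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)
printf '%s\n' "$custom"   # token_embd\.weight=q6_0,output\.weight=q8_0
```

Note that `sed -z` (NUL-separated records, so `\n` is matchable) is a GNU sed extension.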

smol-IQ4_NL 15.405 GiB (4.920 BPW)

PPL over 580 chunks for n_ctx=512 = 7.0040 +/- 0.04646

This mainline-compatible custom mix uses quantization types that are hopefully optimized for Vulkan/ROCm (and possibly Mac).

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 64 Repeating Layers [0-63]

## Gated Attention/Delta Net [Blended 0-63]
blk\..*\.attn_gate\.weight=iq4_nl
blk\..*\.attn_qkv\.weight=iq4_nl
blk\..*\.attn_output\.weight=iq4_nl
blk\..*\.attn_q\.weight=iq4_nl
blk\..*\.attn_k\.weight=iq4_nl
blk\..*\.attn_v\.weight=iq4_nl
blk\..*\.ssm_alpha\.weight=q8_0
blk\..*\.ssm_beta\.weight=q8_0
blk\..*\.ssm_out\.weight=q8_0

# Dense Layers [0-63]
blk\..*\.ffn_down\.weight=iq4_nl
blk\..*\.ffn_(gate|up)\.weight=iq4_nl

# Non-Repeating Layers
token_embd\.weight=iq4_nl
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

    #--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/imatrix-Qwen3.6-27B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/Qwen3.6-27B-BF16-00001-of-00002.gguf \
    /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/Qwen3.6-27B-smol-IQ4_NL.gguf \
    IQ4_NL \
    128

IQ4_KS 14.693 GiB (4.693 BPW)

PPL over 580 chunks for n_ctx=512 = 6.9740 +/- 0.04599

👈 Secret Recipe
#!/usr/bin/env bash

custom="
# 64 Repeating Layers [0-63]

## Gated Attention/Delta Net [Blended 0-63]
blk\..*\.attn_gate\.weight=iq4_ks
blk\..*\.attn_qkv\.weight=iq4_ks
blk\..*\.attn_output\.weight=iq4_ks
blk\..*\.attn_q\.weight=iq4_ks
blk\..*\.attn_k\.weight=iq4_ks
blk\..*\.attn_v\.weight=iq4_ks
blk\..*\.ssm_alpha\.weight=q6_0
blk\..*\.ssm_beta\.weight=q6_0
blk\..*\.ssm_out\.weight=q6_0

# Dense Layers [0-63]
blk\..*\.ffn_down\.weight=iq4_ks
blk\..*\.ffn_(gate|up)\.weight=iq4_ks

# Non-Repeating Layers
token_embd\.weight=q6_0
output\.weight=q8_0
"

custom=$(
  echo "$custom" | grep -v '^#' | \
  sed -Ez 's:\n+:,:g;s:,$::;s:^,::'
)

    #--dry-run \
numactl -N ${SOCKET} -m ${SOCKET} \
./build/bin/llama-quantize \
    --custom-q "$custom" \
    --imatrix /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/imatrix-Qwen3.6-27B-BF16.dat \
    /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/Qwen3.6-27B-BF16-00001-of-00002.gguf \
    /mnt/data/models/ubergarm/Qwen3.6-27B-GGUF/Qwen3.6-27B-IQ4_KS-noimat.gguf \
    IQ4_KS \
    128
