Qwen3.6-35B-A3B — Strix Halo Optimised GGUFs

Dynamic mixed-precision GGUF quantizations of Qwen/Qwen3.6-35B-A3B, produced and benchmarked on a Framework Desktop with an AMD Ryzen AI MAX+ 395 (Radeon 8060S, gfx1151, 128 GB UMA) running Vulkan via llama.cpp.
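To pull a single variant without cloning the whole repository, something like the following works. This is a sketch using huggingface-cli (assumes huggingface_hub is installed); swap in whichever variant file you want:

huggingface-cli download 0xSero/Qwen3.6-35B-A3B-GGUF-Strix \
  Qwen3.6-35B-A3B-Q4_K_M.gguf --local-dir .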

Variants

| File | Size | Prefill (t/s) | Decode (t/s) | Notes |
|------|------|---------------|--------------|-------|
| Qwen3.6-35B-A3B-Q8_0.gguf | 35 GB | 975 | 52.7 | near-lossless reference |
| Qwen3.6-35B-A3B-Q6_K.gguf | 27 GB | 830 | 62.2 | |
| Qwen3.6-35B-A3B-Q5_K_M.gguf | 24 GB | 943 | 64.1 | |
| Qwen3.6-35B-A3B-Q4_K_M.gguf | 20 GB | 1021 | 70.2 | production sweet spot |
| Qwen3.6-35B-A3B-Q4_0.gguf | 19 GB | 1061 | 76.5 | fastest decode |
| Qwen3.6-35B-A3B-IQ4_NL.gguf | 19 GB | 891 | 73.1 | |
| Qwen3.6-35B-A3B-DYNAMIC.gguf | 19 GB | 1100 | 64.0 | fastest prefill; mixed per-tensor quant |

All numbers from llama-bench with pp=4096 (prefill tokens) and tg=128 (decode tokens), using -fa 1 -ctk q8_0 -ctv q8_0 -ub 2048 -b 2048 on a single Vulkan gfx1151 device.

Dynamic mix recipe

DYNAMIC.gguf uses a per-tensor quantization map chosen for the hybrid Gated DeltaNet + Gated Attention architecture:

  • attn_k / attn_q / attn_v → Q8_0 (retrieval-critical)
  • attn_output → Q5_K
  • ffn_gate_inp (router) → Q8_0 (routing-critical)
  • ffn_gate_exps / ffn_up_exps / ffn_down_exps (256 routed experts) → IQ4_NL
  • ffn_gate_shexp / ffn_up_shexp / ffn_down_shexp (shared expert) → Q6_K
  • token_embd / output → Q8_0
  • everything else → Q4_K_M (fallback)
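
A mix like this can be reproduced with llama-quantize's per-tensor overrides. The sketch below assumes a recent llama.cpp build whose llama-quantize supports --tensor-type (pattern=TYPE) alongside --token-embedding-type and --output-tensor-type; the tensor-name patterns are illustrative and may need adjusting to the model's actual tensor names:

# Sketch: regenerate the DYNAMIC mix from an F16 conversion.
# Q4_K_M (last argument) is the fallback for every tensor not
# matched by an override, per the recipe above.
llama-quantize \
  --tensor-type attn_q=q8_0 --tensor-type attn_k=q8_0 --tensor-type attn_v=q8_0 \
  --tensor-type attn_output=q5_k \
  --tensor-type ffn_gate_inp=q8_0 \
  --tensor-type ffn_gate_exps=iq4_nl --tensor-type ffn_up_exps=iq4_nl \
  --tensor-type ffn_down_exps=iq4_nl \
  --tensor-type ffn_gate_shexp=q6_k --tensor-type ffn_up_shexp=q6_k \
  --tensor-type ffn_down_shexp=q6_k \
  --token-embedding-type q8_0 --output-tensor-type q8_0 \
  Qwen3.6-35B-A3B-F16.gguf Qwen3.6-35B-A3B-DYNAMIC.gguf Q4_K_M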

Usage

llama-bench -m Qwen3.6-35B-A3B-DYNAMIC.gguf -ngl 99 -fa 1 -ctk q8_0 -ctv q8_0 \
  -ub 2048 -b 2048 -p 4096 -n 128
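
For serving rather than benchmarking, the same quantized-KV settings carry over to llama-server. A sketch, assuming a Vulkan-enabled build (the flash-attention flag is a bare -fa on older builds and -fa on|off|auto on newer ones; -c 32768 is an arbitrary context choice):

# Sketch: OpenAI-compatible server on the same Vulkan device.
llama-server -m Qwen3.6-35B-A3B-DYNAMIC.gguf -ngl 99 -fa on \
  -ctk q8_0 -ctv q8_0 -c 32768 --host 127.0.0.1 --port 8080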

Benchmark context

Part of a research series on pushing Qwen3.5/3.6 on AMD Strix Halo. Methodology, scripts, and live results are on the benchmark site referenced from the GitHub repo.

License

Apache 2.0 (inherited from base model).
