
Brooooooklyn/Qwen3.6-27B-UD-Q6_K_XL-mlx


Qwen3.6-27B — UD-Q6_K_XL (mlx-node)

Mixed-precision quantization of Qwen/Qwen3.6-27B with a 6-bit base, built for Apple Silicon using the Unsloth Dynamic quantization strategy via mlx-node.

|  | Original (BF16) | This Model |
| --- | --- | --- |
| Size | ~51 GB | 27 GB |
| Format | SafeTensors (sharded) | SafeTensors (sharded) |
| Precision | BF16 uniform | Mixed 6-bit + BF16 |

All Variants

| Repo | GGUF Equivalent | Size | Decode (tok/s) | Speedup vs BF16 |
| --- | --- | --- | --- | --- |
| Brooooooklyn/Qwen3.6-27B-UD-Q2_K_XL-mlx | UD-Q2_K_XL | 15 GB | 18.6 | 3.32x |
| Brooooooklyn/Qwen3.6-27B-UD-Q3_K_XL-mlx | UD-Q3_K_XL | 18 GB | 15.5 | 2.77x |
| Brooooooklyn/Qwen3.6-27B-UD-Q4_K_XL-mlx | UD-Q4_K_XL | 21 GB | 13.9 | 2.48x |
| Brooooooklyn/Qwen3.6-27B-UD-Q5_K_XL-mlx | UD-Q5_K_XL | 25 GB | 12.0 | 2.14x |
| Brooooooklyn/Qwen3.6-27B-UD-Q6_K_XL-mlx | UD-Q6_K_XL | 27 GB | 10.8 | 1.93x |
| Brooooooklyn/Qwen3.6-27B-UD-Q8_K_XL-mlx | UD-Q8_K_XL | 30 GB | 9.9 | 1.77x |

Benchmarked on an Apple M3 Max (128 GB) via examples/lm.ts (Turn 4 steady-state decode).
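The published numbers come from that script; a minimal probe in the same spirit, using only the loadSession/sendStream API shown under Usage below, might look like the sketch that follows. It assumes each non-final stream event carries one decoded token, and it folds prefill time into the measurement, so expect slightly lower numbers than the steady-state figures above.

```ts
import { loadSession } from '@mlx-node/lm';

// Rough throughput probe -- a sketch, not examples/lm.ts itself.
// Assumes each non-final stream event corresponds to one decoded token.
const session = await loadSession('./Qwen3.6-27B-UD-Q6_K_XL-mlx');

const start = performance.now();
let tokens = 0;
for await (const event of session.sendStream('Summarize the Unsloth Dynamic recipe.', {
  config: { maxNewTokens: 512, temperature: 0.6 },
})) {
  if (!event.done) tokens += 1;
}
const seconds = (performance.now() - start) / 1000;
console.log(`~${(tokens / seconds).toFixed(1)} tok/s over ${tokens} tokens`);
```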

Performance

| Model | Size | Decode (tok/s) | Speedup |
| --- | --- | --- | --- |
| BF16 (unquantized) | 51 GB | 5.6 | baseline |
| This model (UD-Q6_K_XL) | 27 GB | 10.8 | 1.93x faster |

Decode is memory-bandwidth bound on Apple Silicon, so fewer bytes read per token translate directly into higher throughput. The hybrid architecture interleaves linear attention (gated delta net, 48 of 64 layers) with full attention (16 of 64 layers).
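A quick sanity check of the bandwidth-bound claim: if decode streams the full weight set per token, then tok/s is roughly effective bandwidth divided by model size. The figures below are estimates derived from the table above, and the ~400 GB/s ceiling is the M3 Max's published spec, not a measurement:

```ts
// Roofline sanity check: tok/s ≈ effective bandwidth / model size.
const models = [
  { name: 'BF16', sizeGB: 51, tokPerSec: 5.6 },
  { name: 'UD-Q6_K_XL', sizeGB: 27, tokPerSec: 10.8 },
];
for (const { name, sizeGB, tokPerSec } of models) {
  // GB read per second through the weights at the observed decode rate.
  console.log(`${name}: ~${(sizeGB * tokPerSec).toFixed(0)} GB/s effective`);
}
// Both land near ~286-292 GB/s -- the same fraction of the M3 Max's
// ~400 GB/s memory bandwidth, which is what a bandwidth-bound decode predicts.
```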

Per-Tensor Bit Assignments (base N = 6)

| Weight | Bits | Rationale |
| --- | --- | --- |
| embed_tokens | 8-bit | KLD ~0.15, very low sensitivity |
| lm_head | 8-bit | KLD ~0.05, safest tensor |
| self_attn.q/k/v_proj | 8-bit + AWQ | KLD ~1.5–2.9, AWQ via layernorm |
| linear_attn.in_proj_qkv/z | 8-bit + AWQ | KLD ~2.9, AWQ via layernorm |
| self_attn.o_proj | bf16 | NOT AWQ-correctable |
| linear_attn.out_proj | bf16 | KLD ~6.0, worst tensor |
| down_proj | 8-bit | "Slightly more sensitive" (snap N+1=7 → 8) |
| gate_proj, up_proj | 6-bit | base bits |
| GDN params (A_log, etc.) | bf16 | state-space dynamics |
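Expressed as code, the table reduces to a name-based lookup. The helper below is a hypothetical sketch of that rule, not the converter's actual implementation (which derives assignments from the Unsloth KLD tables):

```ts
// Hypothetical sketch of the per-tensor rule from the table above.
type Precision = { bits: number; awq?: boolean } | 'bf16';

function precisionFor(tensorName: string, baseBits = 6): Precision {
  // No preceding norm to fuse AWQ scales into -> keep unquantized.
  if (/self_attn\.o_proj|linear_attn\.out_proj/.test(tensorName)) return 'bf16';
  // State-space dynamics parameters (A_log and the table's "etc.") stay bf16.
  if (/\bA_log\b/.test(tensorName)) return 'bf16';
  // Low-sensitivity embedding/output tensors: plain 8-bit.
  if (/embed_tokens|lm_head/.test(tensorName)) return { bits: 8 };
  // Norm-fed sensitive projections: 8-bit with AWQ pre-scaling.
  if (/q_proj|k_proj|v_proj|in_proj_qkv|in_proj_z/.test(tensorName))
    return { bits: 8, awq: true };
  // down_proj: bump to N+1 = 7, then snap to the nearest supported width, 8.
  if (/down_proj/.test(tensorName)) return { bits: 8 };
  // Everything else (gate_proj, up_proj): base bits.
  return { bits: baseBits };
}
```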

Quantization Strategy

The recipe follows Unsloth Dynamic 2.0 per-tensor KLD analysis: sensitive layers get higher bit widths with AWQ correction, while the bulk of the FFN weights are quantized aggressively. imatrix-driven AWQ pre-scaling amplifies important weight channels and fuses the inverse scales into the preceding layer norms, adding zero inference overhead.

AWQ-correctable projections (q/k/v, in_proj_qkv/z) are quantized at 8-bit, with scales fused into the preceding input_layernorm. Non-AWQ-correctable projections (o_proj, out_proj) are kept at bf16: their inputs come from the attention/GDN computation rather than from a norm layer, so there is no norm to fuse the inverse scales into.
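The fusion relies on a simple identity: for a projection y = W·x whose input x is produced by a norm with per-channel gain g, scaling W's input channels by s and dividing g by s leaves the output unchanged in exact arithmetic. A minimal sketch with illustrative names (not mlx-node internals):

```ts
// AWQ scale fusion: y = W x = (W diag(s)) (diag(1/s) x).
// Quantizing W' = W diag(s) protects important input channels; the 1/s
// factor folds into the preceding norm's gain, so inference pays nothing.
function fuseAwqScales(
  W: number[][],      // [out][in] projection weights
  normGain: number[], // per-channel gain of the preceding layernorm/RMSNorm
  s: number[],        // per-input-channel AWQ scales (from the imatrix)
): { scaledW: number[][]; fusedGain: number[] } {
  const scaledW = W.map((row) => row.map((w, j) => w * s[j]));
  const fusedGain = normGain.map((g, j) => g / s[j]);
  return { scaledW, fusedGain }; // quantize scaledW; store fusedGain in the norm
}
```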

Architecture

| Parameter | Value |
| --- | --- |
| Total parameters | 27.4B (dense, all active) |
| Hidden size | 5,120 |
| Layers | 64 (48 linear + 16 full attention) |
| Attention heads | 24 (4 KV heads, GQA 6:1) |
| Head dimension | 256 |
| Intermediate size | 17,408 |
| Vocab size | 248,320 |
| Max context | 262,144 tokens |
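One consequence of the hybrid layout worth noting: only the 16 full-attention layers need a growing KV cache, while the 48 linear-attention layers carry fixed-size recurrent state. A back-of-envelope estimate, assuming bf16 cache entries:

```ts
// KV-cache footprint for the 16 full-attention layers (bf16 assumed).
const fullAttnLayers = 16;
const kvHeads = 4;
const headDim = 256;
const bytesPerValue = 2; // bf16
const perToken = fullAttnLayers * 2 /* K and V */ * kvHeads * headDim * bytesPerValue;
console.log(`${perToken / 1024} KiB per token`);                      // 64 KiB
console.log(`${(perToken * 262_144) / 2 ** 30} GiB at max context`); // 16 GiB
```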

Usage

import { loadSession } from '@mlx-node/lm';

// Load the quantized weights from the local model directory.
const session = await loadSession('./Qwen3.6-27B-UD-Q6_K_XL-mlx');

// Stream the response token by token; the final event arrives with done === true.
for await (const event of session.sendStream('Explain the hybrid attention mechanism in Qwen3.6.', {
  config: { maxNewTokens: 2048, temperature: 0.6, reasoningEffort: 'low' },
})) {
  if (!event.done) process.stdout.write(event.text);
}

How It Was Made

mlx convert \
  -i Qwen3.6-27B \
  -o Qwen3.6-27B-UD-Q6_K_XL-mlx \
  -q --q-bits 6 --q-recipe unsloth \
  --imatrix-path imatrix_unsloth.gguf

Acknowledgments

License

Apache 2.0 (inherited from base model).
