
Brooooooklyn/Qwen3.6-35B-A3B-UD-Q3_K_XL-mlx


Qwen3.6-35B-A3B — UD-Q3_K_XL (mlx-node)

3-bit base mixed-precision quantization of Qwen/Qwen3.6-35B-A3B for Apple Silicon, using the Unsloth Dynamic quantization strategy via mlx-node.

| | Original (BF16) | This Model |
|---|---|---|
| Size | ~66 GB | 18 GB |
| Format | SafeTensors (sharded) | SafeTensors (sharded) |
| Precision | BF16 uniform | Mixed 3/…/8-bit + BF16 |

All Variants

| Repo | GGUF Equivalent | Size | Decode (tok/s) | Speedup vs BF16 |
|---|---|---|---|---|
| Brooooooklyn/Qwen3.6-35B-A3B-UD-Q2_K_XL-mlx | UD-Q2_K_XL | 14 GB | 99.2 | 2.42x |
| Brooooooklyn/Qwen3.6-35B-A3B-UD-Q3_K_XL-mlx | UD-Q3_K_XL | 18 GB | 83.6 | 2.04x |
| Brooooooklyn/Qwen3.6-35B-A3B-UD-Q4_K_XL-mlx | UD-Q4_K_XL | 22 GB | 80.9 | 1.97x |
| Brooooooklyn/Qwen3.6-35B-A3B-UD-Q5_K_XL-mlx | UD-Q5_K_XL | 26 GB | 73.8 | 1.80x |
| Brooooooklyn/Qwen3.6-35B-A3B-UD-Q6_K_XL-mlx | UD-Q6_K_XL | 31 GB | 73.9 | 1.80x |
| Brooooooklyn/Qwen3.6-35B-A3B-UD-Q8_K_XL-mlx | UD-Q8_K_XL | 36 GB | 73.0 | 1.78x |

Benchmarked on Apple M3 Max 128GB via examples/lm.ts (Turn 4 steady-state decode).

Performance

| Model | Size | Decode (tok/s) | Speedup |
|---|---|---|---|
| BF16 (unquantized) | 66 GB | 41.0 | baseline |
| This model (UD-Q3_K_XL) | 18 GB | 83.6 | 2.04x faster |

Decode is memory-bandwidth bound on Apple Silicon, so fewer bytes read per token translate directly into higher throughput. The MoE architecture activates only 8 of 256 experts per token (~3B active parameters out of 35.9B total).
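To illustrate why bandwidth dominates, here is a back-of-envelope throughput model. The ~400 GB/s effective-bandwidth figure and the uniform-bit simplification are assumptions for illustration, not measurements from this card:

```typescript
// Back-of-envelope decode model: tok/s ceiling ≈ bandwidth / bytes read per token.
// Assumptions (illustrative only): ~400 GB/s effective bandwidth on an M3 Max,
// and that each token reads the ~3B active parameters exactly once.
const bandwidthBytesPerSec = 400e9;
const activeParams = 3e9;            // 8 of 256 experts active per token

const bytesPerTokenBF16 = activeParams * 2;        // 16 bits/param
const bytesPerTokenQ3 = activeParams * (3 / 8);    // ~3 bits/param

const ceilingBF16 = bandwidthBytesPerSec / bytesPerTokenBF16;  // tok/s upper bound
const ceilingQ3 = bandwidthBytesPerSec / bytesPerTokenQ3;
```

The measured 2.04x speedup is well below the naive byte ratio, as expected: many tensors stay at 4–8 bit or bf16 (see the bit assignments below), and compute is not free.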

Per-Tensor Bit Assignments (N=3)

| Weight | Bits | Rationale |
|---|---|---|
| embed_tokens | 5-bit | KLD ~0.15, very low sensitivity |
| lm_head | 6-bit | KLD ~0.05, safest tensor |
| self_attn.q/k/v_proj | 5-bit + AWQ | KLD ~1.5–2.9, AWQ via layernorm |
| linear_attn.in_proj_qkv/z | 5-bit + AWQ | KLD ~2.9, AWQ via layernorm |
| self_attn.o_proj | bf16 | NOT AWQ-correctable |
| linear_attn.out_proj | bf16 | KLD ~6.0, worst tensor |
| down_proj | 4-bit | slightly more sensitive than gate/up |
| gate_proj, up_proj | 3-bit | base bits |
| Router gates | 8-bit | MoE routing accuracy |
| GDN params (A_log, etc.) | bf16 | state-space dynamics |
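The table above is, in effect, a pattern-to-bits lookup over tensor names. A hypothetical sketch of such a mapping (the name patterns, especially the router gate's, are illustrative guesses, not the actual mlx-node recipe):

```typescript
// Illustrative first-match-wins mapping from tensor-name patterns to bit
// widths, mirroring the assignment table. This is a sketch, not the real
// recipe implementation; exact tensor names may differ.
type BitSpec = number | 'bf16';

const rules: Array<[RegExp, BitSpec]> = [
  [/embed_tokens/, 5],
  [/lm_head/, 6],
  [/self_attn\.(q|k|v)_proj/, 5],        // + AWQ via input_layernorm
  [/linear_attn\.in_proj_(qkv|z)/, 5],   // + AWQ via input_layernorm
  [/self_attn\.o_proj/, 'bf16'],         // not AWQ-correctable
  [/linear_attn\.out_proj/, 'bf16'],     // worst KLD
  [/down_proj/, 4],                      // slightly more sensitive
  [/(gate|up)_proj/, 3],                 // base bits
  [/mlp\.gate\b/, 8],                    // MoE router (name is a guess)
  [/A_log/, 'bf16'],                     // GDN state-space params
];

function bitsFor(tensorName: string): BitSpec {
  for (const [pattern, bits] of rules) {
    if (pattern.test(tensorName)) return bits;
  }
  return 3; // fall back to base precision
}
```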

Quantization Strategy

Based on Unsloth Dynamic 2.0 per-tensor KLD analysis. Sensitive layers get higher bits with AWQ correction, while the bulk of FFN expert weights are aggressively quantized. imatrix AWQ pre-scaling amplifies important weight channels and fuses inverse scales into preceding layer norms (zero inference overhead).

AWQ-correctable projections (q/k/v, in_proj_qkv/z) are quantized at 5-bit, with their AWQ scales fused into the preceding input_layernorm. Non-AWQ-correctable projections (o_proj, out_proj) are kept at bf16: their inputs come from the attention/GDN computation rather than from a norm layer, so there is nowhere to fold the inverse scales.
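The fusion rests on a simple per-channel identity: scaling a weight's input channels by s while dividing the preceding norm's gain by s leaves the output unchanged. A toy numeric check (all values arbitrary):

```typescript
// AWQ scale fusion identity, per channel:
//   (x * gamma) . w  ==  (x * (gamma / s)) . (w * s)
// so scales that amplify important weight channels can be absorbed into the
// preceding layer norm's gain at zero inference overhead.
const x = [0.5, -1.2, 2.0];        // normalized activations, before the gain
const gamma = [1.0, 0.8, 1.1];     // layernorm gain
const s = [2.0, 4.0, 0.5];         // per-channel AWQ scales
const wCol = [0.3, -0.7, 0.2];     // one column of the following projection

const dot = (a: number[], b: number[]) =>
  a.reduce((acc, v, i) => acc + v * b[i], 0);

// Unfused: apply the gain, then the original weights.
const yOriginal = dot(x.map((v, i) => v * gamma[i]), wCol);
// Fused: gamma/s folded into the norm, w*s handed to the quantizer.
const yFused = dot(
  x.map((v, i) => (v * gamma[i]) / s[i]),
  wCol.map((v, i) => v * s[i]),
);
```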

Architecture

| Parameter | Value |
|---|---|
| Total parameters | 35.9B (~3B active per token) |
| Hidden size | 2,048 |
| Layers | 40 (30 linear attention + 10 full attention) |
| Attention heads | 16 (2 KV heads, GQA 8:1) |
| Head dimension | 256 |
| Experts | 256 per MoE layer, top-8 routing |
| Vocab size | 248,320 |
| Max context | 262,144 tokens |
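One practical consequence of the hybrid layout: only the 10 full-attention layers keep a growing KV cache. A sketch of the cache footprint from the numbers above (assuming a bf16 cache, which this card does not state):

```typescript
// KV-cache footprint implied by the architecture table. Only the 10
// full-attention layers store K/V; the 30 linear-attention layers keep
// constant-size recurrent state instead.
// Assumption: bf16 cache elements (2 bytes); the runtime's actual cache
// dtype is not given here.
const fullAttnLayers = 10;
const kvHeads = 2;            // GQA 8:1 (16 query heads, 2 KV heads)
const headDim = 256;
const bytesPerElem = 2;       // bf16
const contextTokens = 262144; // max context

const bytesPerTokenPerLayer = 2 /* K and V */ * kvHeads * headDim * bytesPerElem;
const totalGiB =
  (bytesPerTokenPerLayer * fullAttnLayers * contextTokens) / 2 ** 30;
// ≈ 5 GiB at the full 262,144-token context
```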

Usage

```typescript
import { loadSession } from '@mlx-node/lm';

const session = await loadSession('./Qwen3.6-35B-A3B-UD-Q3_K_XL-mlx');

for await (const event of session.sendStream('Explain the hybrid attention mechanism in Qwen3.6.', {
  config: { maxNewTokens: 2048, temperature: 0.6, reasoningEffort: 'low' },
})) {
  if (!event.done) process.stdout.write(event.text);
}
```

How It Was Made

```bash
mlx convert \
  -i Qwen3.6-35B-A3B \
  -o Qwen3.6-35B-A3B-UD-Q3_K_XL-mlx \
  -q --q-recipe unsloth \
  --imatrix-path imatrix_unsloth.gguf
```

Acknowledgments

License

Apache 2.0 (inherited from base model).
