majentik/Qwen3.6-35B-A3B-TurboQuant
TurboQuant KV cache compression for Qwen/Qwen3.6-35B-A3B.
This is a documentation repository that explains how to combine Qwen3.6-35B-A3B's weights with TurboQuant inference-time KV cache compression. No weights are stored here — use the base model directly and apply TurboQuant via the Python package or llama.cpp fork.
Hardware compatibility
| Device | VRAM / RAM | Recommendation |
|---|---|---|
| Any host that runs the base model | Same as the chosen weight variant; the KV cache shrinks at runtime | TurboQuant is a KV-cache runtime modifier; pair it with any weight variant |
What is this?
KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime — so the same base weights can be used with or without compression.
| Technique | Where it's applied | Savings |
|---|---|---|
| Weight quantization (GGUF/MLX/AWQ) | Baked into model file | Reduces disk + weight memory |
| TurboQuant KV cache | At inference time | Reduces attention memory (critical for long context) |
Both can be combined for maximum efficiency.
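To get a rough sense of where the savings come from, the sketch below estimates KV cache size at the full 262K context. The layer, head, and head-dimension counts are placeholders for illustration only, not the real Qwen3.6-35B-A3B configuration.

```python
# Back-of-envelope KV cache size: 2 (K and V) x layers x kv_heads x head_dim
# x seq_len x bits per element / 8. The architecture numbers below are
# placeholders, NOT the actual Qwen3.6-35B-A3B config.
layers, kv_heads, head_dim = 48, 8, 128   # hypothetical values
seq_len = 262_144                          # full native context window

def kv_cache_gib(bits_per_element: float) -> float:
    bytes_total = 2 * layers * kv_heads * head_dim * seq_len * bits_per_element / 8
    return bytes_total / 2**30

print(f"16-bit cache: {kv_cache_gib(16):.1f} GiB")
print(f" 4-bit cache: {kv_cache_gib(4):.1f} GiB")
```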
Quickstart
Option A — Python / transformers
Install the turboquant package:
```bash
pip install turboquant
```
Then use it with the base model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from turboquant import TurboQuantCache

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.6-35B-A3B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.6-35B-A3B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Apply TurboQuant to the KV cache
cache = TurboQuantCache(bits=4)  # or bits=2 for more aggressive compression

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    past_key_values=cache,
    use_cache=True,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
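To see the effect on memory, a minimal sketch like the one below compares peak GPU usage with the default cache and with a TurboQuant 4-bit cache. It assumes a single CUDA device and reuses the `model`, `tokenizer`, and `inputs` objects from the snippet above; longer generations make the gap more visible.

```python
# Compare peak GPU memory with the default cache vs. a TurboQuant 4-bit cache.
# Assumes a CUDA device and the model/inputs defined in the quickstart above.
import torch

def peak_gib(cache=None):
    torch.cuda.reset_peak_memory_stats()
    model.generate(**inputs, max_new_tokens=2048, past_key_values=cache, use_cache=True)
    return torch.cuda.max_memory_allocated() / 2**30

print(f"default cache   : {peak_gib():.2f} GiB")
print(f"TurboQuant 4-bit: {peak_gib(TurboQuantCache(bits=4)):.2f} GiB")
```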
Option B — llama.cpp / LM Studio / Ollama (with fork)
TurboQuant KV cache types (planar3, iso3) are not in upstream llama.cpp; they require building the llama-cpp-turboquant fork from source (see the ecosystem table below). Once built:
```bash
llama-cli -m Qwen3.6-35B-A3B.gguf \
  --cache-type-k planar3 --cache-type-v planar3 \
  -ngl 99 -fa \
  -p "Hello"
```
For standard runtimes (LM Studio, Ollama, upstream llama.cpp), use conventional KV cache types (q8_0, q4_0). You lose the TurboQuant-specific benefits but keep GGUF weight quantization.
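For example, with upstream llama.cpp the same command can fall back to a q8_0 cache (quantized V-cache types generally require flash attention, hence -fa):

```bash
llama-cli -m Qwen3.6-35B-A3B.gguf \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -ngl 99 -fa \
  -p "Hello"
```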
Model Specifications
| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3.6-35B-A3B |
| Architecture | Hybrid MoE (256 experts, 8 active), instruct-tuned |
| Parameters | 35B total, 3B active (MoE) |
| Context Length | 262K native |
| BF16 Size | ~70 GB |
| Modalities | Text + Image + Video (multimodal) |
| License | apache-2.0 |
What is TurboQuant?
TurboQuant (ICLR 2026) applies random orthogonal rotations followed by optimal scalar quantization to the KV cache, giving bit-identical prefill logits at 4-bit and up to 4-8× memory savings for long sequences.
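The core idea can be illustrated with a minimal sketch, assuming a plain per-vector symmetric quantizer; the real TurboQuant kernels, rotation construction, and quantizer design differ:

```python
# Illustrative rotate-then-quantize round trip for one attention head's keys.
# This is NOT the TurboQuant implementation, just the underlying idea.
import torch

torch.manual_seed(0)
head_dim = 128
keys = torch.randn(1024, head_dim)  # cached K vectors for one head

# 1. A random orthogonal rotation (QR of a Gaussian matrix) spreads outliers
#    across dimensions, making scalar quantization far less lossy.
rot, _ = torch.linalg.qr(torch.randn(head_dim, head_dim))
rotated = keys @ rot

# 2. Symmetric 4-bit per-vector scalar quantization of the rotated values.
bits = 4
qmax = 2 ** (bits - 1) - 1
scale = rotated.abs().amax(dim=-1, keepdim=True) / qmax
quantized = torch.clamp(torch.round(rotated / scale), -qmax - 1, qmax)

# 3. Dequantize and undo the rotation when the cache entry is read again.
recovered = (quantized * scale) @ rot.T
print("mean abs reconstruction error:", (recovered - keys).abs().mean().item())
```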
Benchmarks (from the TurboQuant repository, Llama 3.1 8B on RTX 5090 — results vary by model and hardware):
- 4-bit KV cache: bit-identical prefill logits
- ~1.4-1.7× speedup on Apple Silicon
- Up to 8× KV memory savings
Performance on Qwen3.6-35B-A3B will differ; please open a discussion if you have independent results.
Current Ecosystem Support
| Runtime | TurboQuant Support | Notes |
|---|---|---|
| Python transformers + turboquant | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use fork below |
| llama-cpp-turboquant fork | ✅ planar3, iso3 | GitHub |
| LM Studio | ❌ Requested | Use q8_0 as alternative |
| Ollama | ❌ Not supported | Use OLLAMA_KV_CACHE_TYPE=q8_0 |
| vLLM | ❌ Not supported | — |
| koboldcpp | ❌ Not supported | — |
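As a concrete fallback for Ollama, the q8_0 KV cache type from the table can be enabled through environment variables; quantized KV caches in Ollama typically also need flash attention turned on:

```bash
# Enable flash attention and a q8_0 KV cache for the Ollama server.
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve
```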
Pre-quantized weight variants
If you want combined weight + KV cache compression, majentik hosts pre-quantized versions: