majentik/Qwen3.6-35B-A3B-TurboQuant

TurboQuant KV cache compression for Qwen/Qwen3.6-35B-A3B.

This is a documentation repository that explains how to combine Qwen3.6-35B-A3B's weights with TurboQuant inference-time KV cache compression. No weights are stored here — use the base model directly and apply TurboQuant via the Python package or llama.cpp fork.

Hardware compatibility

| Device | VRAM / RAM | Recommendation |
|---|---|---|
| Any host that runs the base model | Baseline + runtime savings | TurboQuant is a KV-cache runtime modifier; pair it with any weight variant |

What is this?

KV cache compression reduces the memory used by the attention cache during inference. Unlike weight quantization (which is baked into the GGUF/MLX file), KV cache compression is applied at runtime — so the same base weights can be used with or without compression.

| Technique | Where it's applied | Savings |
|---|---|---|
| Weight quantization (GGUF/MLX/AWQ) | Baked into the model file | Reduces disk + weight memory |
| TurboQuant KV cache | At inference time | Reduces attention memory (critical for long context) |

Both can be combined for maximum efficiency.
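
To see why long context makes the KV cache the bottleneck, here is a back-of-the-envelope sizing sketch. The layer and head counts below are illustrative placeholders, not the real Qwen3.6-35B-A3B attention geometry (check the base model's config.json for the actual values):

# Hypothetical attention geometry -- NOT the real Qwen3.6-35B-A3B config
n_layers   = 48       # placeholder transformer layer count
n_kv_heads = 8        # placeholder GQA key/value head count
head_dim   = 128      # placeholder per-head dimension
seq_len    = 262_144  # the model's advertised native context length

def kv_cache_bytes(bits_per_element: float) -> float:
    # Keys and values (x2); one element per layer x head x position x dim
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bits_per_element / 8

print(f"16-bit KV cache: {kv_cache_bytes(16) / 2**30:.0f} GiB")
print(f"4-bit KV cache:  {kv_cache_bytes(4) / 2**30:.0f} GiB")

With these placeholder numbers, a full 262K-token cache drops from about 48 GiB at 16-bit to about 12 GiB at 4-bit, which is why runtime cache compression matters even when the weights are already quantized.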

Quickstart

Option A — Python / transformers

Install the turboquant package:

pip install turboquant

Then use it with the base model:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from turboquant import TurboQuantCache

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.6-35B-A3B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3.6-35B-A3B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Apply TurboQuant to the KV cache
cache = TurboQuantCache(bits=4)  # or bits=2 for more aggressive compression

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    past_key_values=cache,
    use_cache=True,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

Option B — llama.cpp / LM Studio / Ollama (with fork)

TurboQuant KV cache types (planar3, iso3) are not in upstream llama.cpp; they require building the llama-cpp-turboquant fork from source (see the ecosystem table below).

Once built:

llama-cli -m Qwen3.6-35B-A3B.gguf \
  --cache-type-k planar3 --cache-type-v planar3 \
  -ngl 99 -fa \
  -p "Hello"

For standard runtimes (LM Studio, Ollama, upstream llama.cpp), use conventional KV cache types (q8_0, q4_0). You lose the TurboQuant-specific benefits but keep GGUF weight quantization.
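
For example, upstream llama.cpp selects conventional quantized cache types with the same flags used above (note that llama.cpp currently requires flash attention, -fa, when the V cache is quantized; the GGUF filename here is a placeholder):

llama-cli -m Qwen3.6-35B-A3B.gguf \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  -ngl 99 -fa \
  -p "Hello"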

Model Specifications

| Property | Value |
|---|---|
| Base Model | Qwen/Qwen3.6-35B-A3B |
| Architecture | Hybrid MoE (256 experts, 8 active), instruct-tuned |
| Parameters | 35B total, 3B active (MoE) |
| Context Length | 262K native |
| BF16 Size | ~70 GB |
| Modalities | Text + Image + Video (multimodal) |
| License | apache-2.0 |

What is TurboQuant?

TurboQuant (ICLR 2026) applies random orthogonal rotations followed by optimal scalar quantization to the KV cache. The authors report bit-identical prefill logits at 4-bit and up to 4-8× memory savings for long sequences.
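
The principle can be sketched in a few lines of PyTorch: rotate each key/value vector with a random orthogonal matrix (which spreads outlier channels across all dimensions), then quantize with a scalar quantizer. The sketch below is illustrative only; it uses plain uniform quantization and made-up shapes rather than TurboQuant's optimal scalar quantizer and fused kernels:

import torch

def random_orthogonal(d: int) -> torch.Tensor:
    # QR decomposition of a Gaussian matrix gives a random orthogonal matrix
    q, r = torch.linalg.qr(torch.randn(d, d))
    return q * torch.sign(torch.diagonal(r))  # fix the sign ambiguity

def quantize(x: torch.Tensor, bits: int = 4):
    # Per-row uniform scalar quantization (TurboQuant uses an optimal quantizer)
    levels = 2 ** (bits - 1) - 1
    scale = x.abs().amax(dim=-1, keepdim=True) / levels
    codes = torch.clamp(torch.round(x / scale), -levels - 1, levels)
    return codes, scale

head_dim = 128
R = random_orthogonal(head_dim)      # one rotation, reused at inference time
k = torch.randn(1024, head_dim)      # stand-in for one head's key vectors

k_rot = k @ R                        # rotate: flattens outlier channels
codes, scale = quantize(k_rot)       # store low-bit codes + per-row scales
k_hat = (codes * scale) @ R.T        # dequantize, then rotate back

print("max abs reconstruction error:", (k - k_hat).abs().max().item())

Rotating first matters because attention activations tend to concentrate energy in a few outlier channels; after a random rotation the coordinates are closer to Gaussian, which is the regime where simple scalar quantizers do well.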

Benchmarks, from the TurboQuant repository (Llama 3.1 8B on an RTX 5090):

  • 4-bit KV cache: bit-identical prefill logits
  • ~1.4-1.7× speedup on Apple Silicon
  • Up to 8× KV memory savings

These numbers will differ on Qwen3.6-35B-A3B and on other hardware; please open a discussion if you have independent results.

Current Ecosystem Support

| Runtime | TurboQuant Support | Notes |
|---|---|---|
| Python transformers + turboquant | ✅ Full | Drop-in cache class |
| llama.cpp upstream | ❌ Not merged | Use fork below |
| llama-cpp-turboquant fork | ✅ planar3, iso3 | GitHub |
| LM Studio | Requested | Use q8_0 as alternative |
| Ollama | ❌ Not supported | Use OLLAMA_KV_CACHE_TYPE=q8_0 (example below) |
| vLLM | ❌ Not supported | |
| koboldcpp | ❌ Not supported | |
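
For Ollama, conventional quantized cache types are selected through environment variables; at the time of writing, a quantized KV cache also requires flash attention to be enabled:

# Standard 8-bit KV cache quantization in Ollama (not TurboQuant)
OLLAMA_FLASH_ATTENTION=1 OLLAMA_KV_CACHE_TYPE=q8_0 ollama serve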

Pre-quantized weight variants

If you want combined weight + KV cache compression, majentik hosts pre-quantized weight variants of the base model that can be paired with the TurboQuant runtime cache.
