Please support my work: https://donate.sybilsolutions.ai
Qwen3.6-28B-REAP20-Opus-A3B
A 20%-expert-pruned + Opus-trace fine-tuned variant of Qwen/Qwen3.6-35B-A3B, produced via Cerebras REAP (Router-weighted Expert Activation Pruning, arXiv:2510.13999) followed by LoRA SFT on public Claude Opus reasoning traces.
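A minimal usage sketch, assuming the merged bf16 weights are published under this repo id and load through the standard transformers AutoModelForCausalLM path (both assumptions, not confirmed by this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this card's title; adjust if the published name differs.
repo_id = "0xSero/Qwen3.6-28B-REAP"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Return a JSON object with keys 'name' and 'age'."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```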
Headline numbers
| Metric | Base Qwen3.6-35B-A3B | This model (20% REAP + Opus SFT) | Δ |
|---|---|---|---|
| MMLU (200-sample lite) | {{MMLU_BASE}} | {{MMLU_THIS}} | {{MMLU_DELTA}} |
| GSM8K (100-sample lite) | {{GSM_BASE}} | {{GSM_THIS}} | {{GSM_DELTA}} |
| HumanEval (50-sample parse rate) | {{HE_BASE}} | {{HE_THIS}} | {{HE_DELTA}} |
| Structured JSON parse (20 samples) | {{JSON_BASE}} | {{JSON_THIS}} | {{JSON_DELTA}} |
| Mermaid render (10 samples) | {{MERM_BASE}} | {{MERM_THIS}} | {{MERM_DELTA}} |
| AdvBench refusal (32 prompts) | {{REFUSE_BASE}} | {{REFUSE_THIS}} | {{REFUSE_DELTA}} |
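The JSON and Mermaid rows are parse-rate style checks. The exact harness and prompts are not described on this card, so the following is only an illustrative sketch of how a JSON parse rate can be scored:

```python
import json

def json_parse_rate(completions):
    """Fraction of model completions that contain a syntactically valid JSON object."""
    ok = 0
    for text in completions:
        # Take the outermost {...} span, tolerating prose or <think> text around it.
        start, end = text.find("{"), text.rfind("}")
        if start == -1 or end <= start:
            continue
        try:
            json.loads(text[start:end + 1])
            ok += 1
        except json.JSONDecodeError:
            pass
    return ok / len(completions) if completions else 0.0
```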
Architecture
- Base: Qwen3.6-35B-A3B (40 layers, 256 experts/layer, 8 routed + 1 shared active, qwen3_5_moe)
- After 20% REAP: 205 experts/layer kept, 51 experts/layer pruned → ~28B total params, still ~3B active
- Fine-tune: LoRA rank 32, α 64 on the q, k, v, o, gate, up, down projections; bf16 weights after merge (see the sketch after this list).
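For reference, a PEFT-style LoRA configuration matching those numbers might look like the sketch below. The `*_proj` module names are assumptions based on common Qwen-style naming; the card itself only lists q, k, v, o, gate, up, down.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                  # LoRA rank from this card
    lora_alpha=64,         # alpha from this card
    lora_dropout=0.0,      # assumed; not stated on this card
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed module names for the q/k/v/o attention and gate/up/down expert projections.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```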
Pipeline
- Calibration merge: 5,000 stratified samples from
  - /Users/sero/.../reap-expert-swap/dataset/calibration-20k.jsonl (general, coding, reasoning, etc.)
  - 0xSero/structured-outputs-calibration-v1 (JSON / Mermaid / schema)
- REAP observation (this fork's Qwen3_5Moe-aware observer, multi-GPU layerwise on 8× A100-40GB): {{OBS_DURATION}}
- REAP prune @ 20% using the reap saliency metric, renormalized router weights, seed 42 (see the sketch after this list).
- Opus-trace SFT via LLaMA-Factory + DeepSpeed ZeRO-3 (8× A100): LoRA, 2 epochs on nohurry/Opus-4.6-Reasoning-3000x-filtered (2,326 reasoning trajectories with explicit <think>…</think>\nanswer structure).
- GGUF: bf16, Q8_0, Q6_K, Q5_K_M, Q4_K_M with imatrix from the merged calibration set.
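For intuition, the prune step can be sketched as: score each expert by its router-weighted activation over the calibration tokens, then drop the lowest-scoring 20% per layer. This is a schematic of the REAP idea, not the Cerebras implementation; the exact statistic, aggregation, and renormalization details are assumptions.

```python
import torch

def reap_saliency(gate_weights, expert_out_norms):
    """Router-weighted expert activation saliency for one MoE layer.

    gate_weights:     [num_tokens, num_experts] router probabilities (zero for unrouted experts)
    expert_out_norms: [num_tokens, num_experts] L2 norm of each expert's output per token
    Returns a [num_experts] saliency score averaged over the calibration tokens.
    """
    return (gate_weights * expert_out_norms).mean(dim=0)

def prune_experts(saliency, prune_ratio=0.20):
    """Keep the highest-saliency experts; at 20% on 256 experts this keeps 205 and prunes 51."""
    num_experts = saliency.numel()
    num_keep = num_experts - int(num_experts * prune_ratio)
    keep = torch.topk(saliency, num_keep).indices.sort().values
    return keep
```

After pruning, the router's output dimension shrinks to the kept experts and the per-token gate probabilities are renormalized over the survivors, which is what "renormalized router weights" above refers to.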
Sidecar observations
REAP observation artifacts live in the separate dataset repo
0xSero/qwen3.6-35b-a3b-reap-observations.
Known limitations
- Refusal behavior follows the base model plus Opus SFT; no explicit abliteration was applied in this release. The model will refuse straight adversarial probes at roughly base-model rates.
- Reasoning quality on GSM8K-style problems depends on the <think> chain-of-thought; short max-token limits hurt accuracy (see the sketch after this list).
- Structured-output calibration is oversampled vs. the base mix (JSON/Mermaid experts preferentially retained).
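Because accuracy depends on letting the <think> block finish, it helps to give generation a generous token budget and read the answer from after the closing tag. A small sketch, assuming the <think>…</think>\nanswer trace format described above (the budget value is arbitrary):

```python
def extract_answer(generated_text):
    """Return the text after the closing </think> tag, or the whole output if no tag is present."""
    marker = "</think>"
    idx = generated_text.rfind(marker)
    return generated_text[idx + len(marker):].strip() if idx != -1 else generated_text.strip()

# When generating, leave room for the reasoning chain, e.g.:
# output = model.generate(input_ids, max_new_tokens=4096)  # short budgets truncate <think> and hurt accuracy
```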
License
Apache 2.0, inherited from base model. This checkpoint is a derivative work; please preserve attribution.
Citation
@misc{lasby2025reap,
title = {REAP: Router-weighted Expert Activation Pruning for Mixture-of-Experts},
author = {Lasby, Mike and others},
year = {2025},
eprint = {2510.13999},
archivePrefix = {arXiv},
}