
AEON-7/Qwen3.6-35B-A3B-heretic-NVFP4


Qwen3.6-35B-A3B-heretic — NVFP4 (v2 multimodal-preserved)

šŸš€ PRODUCTION DEPLOYMENT GUIDE: github.com/AEON-7/Qwen3.6-NVFP4-DFlash

The GitHub repo is the definitive turn-key setup for DGX Spark — pre-built Docker image, end-to-end deployment guide, validated OpenClaw config, the 8 vLLM patches that actually make this work on SM121, and a concurrency-sweep benchmark harness.

  • Image: ghcr.io/aeon-7/vllm-spark-omni-q36:v1.2 (vLLM HEAD source-built for cu130/sm_120 + 5 source patches + flashinfer 0.6.8 + Marlin GEMM enforcement)
  • Pairs with z-lab/Qwen3.6-35B-A3B-DFlash drafter (must be post 2026-04-19 revision)
  • Production-stable under sustained chat load — measured 116.8 tok/s single-stream / 785.3 tok/s aggregate at 128-concurrency on DGX Spark

What changed in v2 (2026-04-19)

v1 of this checkpoint had model.language_model.layers.X.* keys remapped to model.layers.X.* so vLLM's text-only Qwen3_5MoeForCausalLM loader would pick them up. That layout was unstable in production — intermittent NaN/crash in the prefix-strip codepath during real chat sessions.

v2 re-quantizes the same source (tvall43/Qwen3.6-35B-A3B-heretic) with AutoModelForImageTextToText, preserving the canonical multimodal layout:

  • Architecture: Qwen3_5MoeForConditionalGeneration (vLLM's canonical class — no registry hack required)
  • Keys: model.language_model.layers.X.* retained natively (no post-quantization key rewriting)
  • 27-block ViT vision encoder preserved BF16
  • 30 linear-attention (Mamba/GDN) layers preserved BF16
  • All 122,880 per-expert NVFP4 keys (40 layers Ɨ 256 experts Ɨ 3 projections Ɨ 4 quant components)

vLLM serves it via the canonical multimodal class with no prefix-strip code path in the inference hot loop. Result: rock-solid stability where v1 was crashing on virtually every interaction.

āš ļø If you cloned v1 of this repo, delete and re-pull. Same URL — v2 commits replaced v1.


NVFP4-quantized version of tvall43/Qwen3.6-35B-A3B-heretic — an abliterated (decensored, 5/100 refusal rate) Qwen 3.6 35B-A3B Mixture-of-Experts multimodal model with thinking/reasoning capabilities.

Quantized using llmcompressor with the compressed-tensors nvfp4-pack-quantized format. Calibrated with 256 samples from open-platypus over 40 sequential decoder-layer stages. Vision encoder, linear-attention (Mamba/GDN) layers, MoE routers, gates, norms, and lm_head/embed_tokens preserved in BF16.

Designed for deployment on NVIDIA DGX Spark (GB10, Blackwell SM 12.0+) with native FP4 tensor-core support. Pairs with z-lab/Qwen3.6-35B-A3B-DFlash for spec-decode acceptance of 2.7-4.4 mean accepted tokens per target step on greedy workloads.


Performance Benchmarks

Test Setup

Hardware: NVIDIA DGX Spark (GB10, SM 12.1, 128 GB unified memory)
Software: vLLM HEAD source-built (image ghcr.io/aeon-7/vllm-spark-omni-q36:v1.2), flashinfer 0.6.8, DFlash speculative decoding with num_speculative_tokens=15, BF16 KV cache, --gpu-memory-utilization 0.85

Compute paths (verified live):

  • Linear NVFP4 GEMM (q/k/v/o, mlp — 10 attention layers): FlashInferCutlassNvFp4LinearKernel — native FP4 tensor cores, autotuned at boot
  • MoE NVFP4 GEMM (256 experts Ɨ 40 layers): MARLIN — weight-only decompress to BF16. Tested 2026-04-21: every other backend (FLASHINFER_TRTLLM, FLASHINFER_CUTEDSL{,_BATCHED}, FLASHINFER_CUTLASS, VLLM_CUTLASS) rejects this 256-expert Ɨ 512-intermediate shape in is_supported_config() — auto-selector arrives at MARLIN as the only supported MoE backend

Bench config (production): --max-num-seqs 128, --max-model-len 262144, --max-num-batched-tokens 65536. Single config — unlike other models with separate "single-stream" vs "throughput" configs, Qwen3.6 ships one production config that handles both well.

Methodology: All tests run with enable_thinking=false for clean decode-rate measurement (production with thinking enabled adds reasoning-token overhead before content emission but does not change decode throughput). Greedy sampling (T=0) unless explicitly noted as stochastic. SSE streaming. Median across N runs. Mixed-domain prompt set (code, math, QA, reasoning). Zero errors across 1,200+ requests in the full test.

āš ļø DFlash speedup is workload-dependent. Per-prompt decode rate ranges from 41 to 127 tok/s in the single-stream test, depending on how predictable the drafter finds the target's output. Greedy reasoning workloads (math, code) hit the upper end (78%+ acceptance). Creative / sampled workloads are more variable.


1. Single-Stream Performance

Best for interactive chat and agentic UX. All measurements greedy (T=0) unless noted.

Decode rate (10 trials, 200-token outputs)

Statistic | tok/s
Median | 83.9
p95 | 127.5
Min | 41.1
Max | 127.5

Variance reflects DFlash acceptance differences across prompt classes — math/code prompts hit ~125 tok/s with high drafter agreement, more open-ended prompts settle around 60-90 tok/s.

TTFT by prompt length (5 trials per class)

Prompt class | Approx. input tokens | TTFT p50 | TTFT p95 | TTFT min | Effective prefill
Tiny | 2 | 99 ms | 102 ms | 98 ms | 20 tok/s
Short | 7 | 114 ms | 115 ms | 110 ms | 62 tok/s
Medium | 50 | 123 ms | 128 ms | 121 ms | 407 tok/s
Long | 465 | 259 ms | 314 ms | 257 ms | 1,797 tok/s

Sub-130ms TTFT for any prompt under ~50 tokens — fixed kernel-launch overhead dominates short prefill.

Decode rate by output length (3 trials per length)

Max tokens | Actual tokens (median) | TTFT | Decode rate | Total latency
50 | 50 | 113 ms | 70.1 tok/s | 0.82 s
200 | 200 | 112 ms | 88.4 tok/s | 2.37 s
500 | 331* | 116 ms | 115.6 tok/s | 4.44 s
1000 | 330* | 113 ms | 118.3 tok/s | 6.28 s

* model emitted EOS naturally before hitting max_tokens.

Decode rate increases with output length — DFlash steady-state amortization improves over the first 100-200 tokens once the drafter and target lock into a stable acceptance pattern.

Sampling: greedy vs stochastic (5 trials per mode)

Mode | Decode p50 | Decode p95 | TTFT p50
Greedy (T=0) | 76.5 tok/s | 123.0 tok/s | 115 ms
Stochastic (T=0.7) | 64.8 tok/s | 125.4 tok/s | 113 ms

15% degradation from T=0 to T=0.7 — less dramatic than typical for spec-decode systems, since DFlash's drafter remains useful even at moderate sampling. Use T=0 for max DFlash speedup; T=0.7 for diversity.

Long-prompt prefill (RAG / document workloads)

Input tokens | TTFT (ā‰ˆ prefill) | Prefill rate | Decode rate after prefill
1K | 519 ms | 1,973 tok/s | 48.8 tok/s
4K | 2,594 ms | 1,579 tok/s | 41.1 tok/s
16K | 8,007 ms | 2,046 tok/s | 34.6 tok/s
32K | 19,368 ms | 1,692 tok/s | 23.0 tok/s

Prefill rate plateaus around 2K tok/s due to (a) the drafter prefilling the same context in parallel and (b) Qwen3.6's 30 linear-attention (Mamba/GDN) layers having higher prefill constant factor than parallel softmax attention. Decode-after-prefill drops gracefully (~50% from 1K → 32K context).

Single-stream summary

Metric | Value
Single-stream decode (200-tok output) | 83.9 tok/s median
Decode @ 500-1000 tok output (DFlash steady state) | 115-118 tok/s
Short-prompt TTFT | 99-128 ms
16K-prompt TTFT | 8.0 s
32K-prompt TTFT | 19.4 s
Peak prefill throughput | ~2,046 tok/s @ 16K prompt
Decode rate with 32K context | 23.0 tok/s (53% drop vs short context)

2. Concurrent-Session Performance

Best for agent fleets and multi-user serving. 3 trials per level, median run reported (sorted by aggregate throughput). Mixed prompts, 200-token output, T=0.7 (stochastic — production-realistic), SSE streaming.

Throughput scaling (N concurrent clients, 200-tok output)

Concurrent | Errors | Agg tok/s (median of 3) | Per-req decode p50 | Per-req decode min | TTFT p50 | TTFT p95
1 | 0 | 102.9 | 109.1 | 109.1 | 111 ms | 111 ms
2 | 0 | 131.3 | 94.0 | 68.9 | 144 ms | 144 ms
4 | 0 | 128.1 | 48.5 | 38.9 | 191 ms | 191 ms
8 | 0 | 163.3 | 29.2 | 14.2 | 355 ms | 356 ms
16 | 0 | 227.6 | 19.3 | 8.6 | 501 ms | 503 ms
32 | 0 | 275.5 | 11.6 | 5.2 | 701 ms | 703 ms
64 | 0 | 310.8 | 6.9 | 3.3 | 1.07 s | 11.2 s
128 | 0 | 313.6 | 6.5 | 3.0 | 14.1 s | 46.7 s

Zero errors across the concurrent sweep — the 128-concurrency level alone accounts for 384 requests (3 runs Ɨ 128), and all levels combined exceed 1,200 requests.

Aggregate throughput plateaus at ~313 tok/s from 64 concurrent onward — that's the GPU's compute wall on this 35B-total / 3B-active MoE with linear-attention KV reads plus DFlash drafter overhead. TTFT spikes severely at 128 concurrent (14.1 s p50, 46.7 s p95) because all 128 sequences fit in the scheduler but compute is fully saturated, so each decode step's work is shared across 128 streams. For latency-sensitive UX, target 16-32 concurrent; for max throughput, use the full 128.
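
To find the right level for your own workload, a small sweep is enough. A minimal sketch (not the repo's concurrency-sweep harness; prompts and levels are placeholders) using the openai async client against the Quick Start server:

# Illustrative concurrency sweep — reports aggregate tok/s per level.
import asyncio, time
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="none")

async def one_request(prompt: str) -> int:
    resp = await client.chat.completions.create(
        model="qwen36-fast",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
        temperature=0.7,   # stochastic, matching the sweep above
        extra_body={"chat_template_kwargs": {"enable_thinking": False}},
    )
    return resp.usage.completion_tokens

async def sweep(level: int) -> float:
    prompts = [f"Explain topic #{i} in three sentences." for i in range(level)]
    t0 = time.perf_counter()
    tokens = await asyncio.gather(*(one_request(p) for p in prompts))
    return sum(tokens) / (time.perf_counter() - t0)   # aggregate tok/s

async def main():
    for level in (1, 4, 16, 64, 128):
        print(f"{level:>4} concurrent: {await sweep(level):7.1f} tok/s aggregate")

asyncio.run(main())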

TTFT-only scaling (1-token output, prefill + first-token)

Measures pure scheduler queue contention — critical for agent UX:

Concurrent | TTFT p50 | TTFT p95 | TTFT min | TTFT max
1 | 74 ms | 75 ms | 72 ms | 75 ms
4 | 99 ms | 100 ms | 97 ms | 100 ms
16 | 249 ms | 263 ms | 238 ms | 263 ms
64 | 560 ms | 698 ms | 451 ms | 707 ms

TTFT stays sub-700ms through 64 concurrent — smooth UX for small agent fleets. Beyond 64, TTFT accumulates queue-wait time as compute is fully consumed.

Concurrent with 1K-token prompts (RAG-style workload)

50-token output with 1,024-token prompts — simulates agents doing document QA or retrieval-augmented responses. Median of 2 runs.

Concurrent | Errors | Agg tok/s | TTFT p50 | TTFT p95 | Decode p50
1 | 0 | 23.1 | 494 ms | 494 ms | 44.1
4 | 0 | 39.5 | 1,673 ms | 1,720 ms | 24.6
16 | 0 | 47.1 | 6,179 ms | 6,180 ms | 10.6
64 | 0 | 49.8 | 19,297 ms | 33,352 ms | 2.5

RAG throughput peaks around 50 tok/s at 16-64 concurrent. The aggregate is lower than the short-prompt sweep because each request spends most of its wall-clock in prefill (1K tokens) rather than decode. Use prefix caching if your RAG workload has repeated context blocks — the production compose enables --enable-prefix-caching which can give 5-10Ɨ speedup on shared-prefix RAG.
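
A sketch of what shared-prefix requests look like from the client side (document path, question text, and token counts are placeholders; the caching itself is entirely server-side once --enable-prefix-caching is set):

# Shared-prefix RAG sketch: requests that repeat the same leading context let
# vLLM reuse its cached KV blocks, so only the non-shared suffix is prefilled
# on later calls.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
document = open("contract.txt").read()        # e.g. ~1K tokens of shared context

def ask(question: str) -> str:
    resp = client.chat.completions.create(
        model="qwen36-fast",
        messages=[
            # Keep the shared document in an identical leading message so the
            # token prefix matches exactly across requests (cache-hit condition).
            {"role": "system", "content": f"Answer from this document:\n{document}"},
            {"role": "user", "content": question},
        ],
        max_tokens=256,
        temperature=0,
        extra_body={"chat_template_kwargs": {"enable_thinking": False}},
    )
    return resp.choices[0].message.content

print(ask("Who are the parties to the agreement?"))   # pays full prefill
print(ask("What is the termination clause?"))          # prefix-cached prefill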

Concurrent-session summary

Metric | Value
Peak aggregate throughput | 313.6 tok/s @ 128 concurrent (median of 3 trials)
Scaling from 1 → 128 | 3.05Ɨ throughput (compute-bound — DFlash + 35B MoE saturates GB10 around 64 streams)
Per-request decode @ 128 | 6.5 tok/s p50, 3.0 min
TTFT @ 64 concurrent | 1.07 s p50 (acceptable for agent fleets)
TTFT @ 128 concurrent | 14.1 s p50 (queue-bound — useful for batch only)
Error rate across full bench | 0.0% (1,200+ requests, conc 1 → 128)
Best concurrency for chat UX | 4-16 (per-req 19-48 tok/s, TTFT < 500 ms)
Best concurrency for max throughput | 64-128 (saturated compute, TTFT trade-off)

Key Performance Metrics Summary

Metric | Value
Single-stream decode (200-tok output) | 83.9 tok/s median
Single-stream decode @ DFlash steady state | 118 tok/s (1000-tok output)
Short-prompt TTFT | 99-128 ms
Peak aggregate throughput | 313.6 tok/s @ 128 concurrent
TTFT @ 16 concurrent (smooth UX) | 501 ms p50
TTFT @ 64 concurrent (still usable) | 1.07 s p50
Greedy vs stochastic decode penalty | 15% (76.5 → 64.8 tok/s)
DFlash position-0 acceptance (greedy workloads) | 62-78%
Mean accepted tokens per target step | 2.7-4.4
Long-context decode @ 32K prompt | 23.0 tok/s
Total bench wall-clock | 11 minutes (1,200+ requests, 0 errors)

Scaling efficiency (200-tok concurrent test)

Concurrency | Throughput gain vs 1-req
1 | 1.0Ɨ
4 | 1.2Ɨ
16 | 2.2Ɨ
64 | 3.0Ɨ
128 | 3.05Ɨ

Scaling is GPU-compute-bound rather than memory-bound — DFlash on a 35B MoE with hybrid linear+full attention saturates the GB10's compute around 64 concurrent. Per-request throughput degrades from 109 tok/s (1-req) to 6.5 tok/s (128-req). For comparison, a non-spec-decode setup would scale much more linearly but lose the ~2-4Ɨ single-stream speedup DFlash provides.


Test methodology notes

  • enable_thinking=false — bench disables Qwen3.6's thinking tag for clean decode-rate measurement. Production with thinking on adds reasoning-token overhead before content emission (use max_tokens ≄ 2048 for thinking-enabled requests).
  • DFlash speedup is workload-dependent — math, code, agentic, and reasoning workloads at T=0 hit the highest acceptance rates. Creative writing or open-ended chat sees lower acceptance.
  • Mixed-prompt set in concurrent tests: code, math, QA, creative writing, single-line answers — to avoid biasing toward DFlash-friendly prompts.
  • 3 trials per concurrency level for the throughput sweep, median run (by aggregate tok/s) reported. RAG section uses 2 trials.
  • 200-token output as the standard test length (except TTFT-only test which uses 1 token, RAG which uses 50, and decode-by-output which sweeps 50→1000).
  • Error tracking: 0/1,200+ requests failed across the full test (all sections combined).
  • Reproducible: bench script at scripts/bench_full.py; raw JSON results at bench/qwen36_v2_2026-04-20.json.

āš ļø IMPORTANT REQUIREMENTS

# | Requirement | Why
1 | Native Blackwell GPU (SM 10.0+ — B200, GB10, RTX PRO 6000 Blackwell, RTX 5090) | NVFP4 needs hardware FP4 tensor cores
2 | vLLM with sm_120 NVFP4 kernels — use ghcr.io/aeon-7/vllm-spark-omni-q36:v1.2 (or build from Qwen3.6-NVFP4-DFlash) | Stock vLLM wheels don't compile FP4 kernels for SM 12.x; the SM121 SMEM workarounds aren't upstream yet
3 | --quantization compressed-tensors (NOT modelopt) | This checkpoint uses llmcompressor's compressed-tensors NVFP4 format
4 | --trust-remote-code | Qwen3.6 ships custom modeling code
5 | --attention-backend flash_attn (when using DFlash) | DFlash spec decode requires the flash_attn backend
6 | VLLM_TEST_FORCE_FP8_MARLIN=1 env (defensive pin) | Pins the NVFP4 MoE backend to MARLIN. As of 2026-04-21, vLLM's auto-selector arrives at MARLIN anyway because every other MoE backend rejects our 256-expert Ɨ 512-intermediate shape — the env defends against future vLLM versions adding a half-broken backend. The linear NVFP4 path is unaffected and uses native FP4 tensor cores (FlashInferCutlassNvFp4Linear).
7 | DFlash drafter from post-2026-04-19 revision | Earlier z-lab drafter had a long-context crash bug
8 | Latest transformers (≄5.5.4) | qwen3_5_moe model_type registration
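
A quick pre-flight sketch for requirements 1 and 8 above (GPU architecture and transformers version); it assumes torch and transformers are importable in whatever environment you check from — the remaining requirements are server flags validated at vLLM startup.

# Pre-flight check: Blackwell compute capability + transformers version.
import torch, transformers
from packaging.version import Version

major, minor = torch.cuda.get_device_capability(0)
print(f"GPU: {torch.cuda.get_device_name(0)} (SM {major}.{minor})")
assert (major, minor) >= (10, 0), "NVFP4 needs a Blackwell GPU (SM 10.0+)"

print(f"transformers {transformers.__version__}")
assert Version(transformers.__version__) >= Version("5.5.4"), \
    "upgrade transformers (>= 5.5.4) for qwen3_5_moe support"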

Quick Start (DGX Spark, with DFlash spec decode)

# 1. Pull the image (anonymous public GHCR pull — anyone can run this)
docker pull ghcr.io/aeon-7/vllm-spark-omni-q36:v1.2

# 2. Pull both models
sudo mkdir -p /opt/qwen36 && sudo chown $USER:$USER /opt/qwen36
cd /opt/qwen36
export HF_HUB_ENABLE_HF_TRANSFER=1
hf download AEON-7/Qwen3.6-35B-A3B-heretic-NVFP4 --local-dir ./qwen36-nvfp4 &
hf download z-lab/Qwen3.6-35B-A3B-DFlash         --local-dir ./qwen36-dflash &
wait

# 3. Get the production compose file
curl -fsSL \
  https://raw.githubusercontent.com/AEON-7/Qwen3.6-NVFP4-DFlash/main/examples/docker-compose.yml \
  -o docker-compose.yml

# 4. Start
docker compose up -d
docker compose logs -f   # wait for "Application startup complete" (~3-5 min)

# 5. Test (use temperature=0 + ≄2048 max_tokens for thinking-enabled requests)
curl http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "qwen36-fast",
    "messages": [{"role":"user","content":"What is 17 Ɨ 23? Show your work."}],
    "max_tokens": 2048,
    "temperature": 0
  }'

Full step-by-step (with pre-flight checks, smoke tests, systemd service, OpenClaw integration): github.com/AEON-7/Qwen3.6-NVFP4-DFlash/blob/main/docs/dgx-spark-setup.md
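
The same smoke test from Python, streamed — a minimal sketch assuming the openai package; the reasoning_content delta field is what the compose's --reasoning-parser qwen3 exposes for the thinking tokens, so treat that attribute access as an assumption if you change the parser.

# Python equivalent of the curl smoke test (step 5), streamed.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

stream = client.chat.completions.create(
    model="qwen36-fast",
    messages=[{"role": "user", "content": "What is 17 Ɨ 23? Show your work."}],
    max_tokens=2048,        # thinking is on by default — leave budget for <think>
    temperature=0,          # greedy maximizes DFlash acceptance
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta if chunk.choices else None
    # with --reasoning-parser qwen3, thinking arrives as reasoning_content deltas
    if delta and getattr(delta, "reasoning_content", None):
        print(delta.reasoning_content, end="", flush=True)
    if delta and delta.content:
        print(delta.content, end="", flush=True)
print()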

Production docker-compose (the actual flags that work)

services:
  vllm:
    image: ghcr.io/aeon-7/vllm-spark-omni-q36:v1.2
    container_name: vllm-qwen36-heretic
    restart: unless-stopped
    network_mode: host
    environment:
      - VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
      - TORCH_MATMUL_PRECISION=high
      - PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
      - NVIDIA_FORWARD_COMPAT=1
      - VLLM_TEST_FORCE_FP8_MARLIN=1     # MANDATORY on DGX Spark / SM121
    volumes:
      - /opt/qwen36/qwen36-nvfp4:/models/qwen36
      - /opt/qwen36/qwen36-dflash:/models/qwen36-dflash
    command:
      - bash
      - -c
      - |
        exec vllm serve /models/qwen36 \
          --served-model-name qwen36-35b-heretic qwen36-fast qwen36-deep \
          --host 0.0.0.0 --port 8000 \
          --tensor-parallel-size 1 \
          --dtype auto \
          --quantization compressed-tensors \
          --max-model-len 262144 \
          --max-num-seqs 128 \
          --max-num-batched-tokens 65536 \
          --gpu-memory-utilization 0.85 \
          --enable-chunked-prefill \
          --enable-prefix-caching \
          --load-format safetensors \
          --trust-remote-code \
          --enable-auto-tool-choice \
          --tool-call-parser qwen3_coder \
          --reasoning-parser qwen3 \
          --speculative-config '{"method":"dflash","model":"/models/qwen36-dflash","num_speculative_tokens":15}' \
          --attention-backend flash_attn
    ipc: host
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

Note: --enforce-eager is NOT required with the v1.2 image + the post-2026-04-19 DFlash drafter. Earlier writeups recommended it as a workaround for two separate bugs (drafter long-context crash + cudagraph capture-size misalignment). Both are now fixed — the drafter on HF, and the cudagraph alignment via the v1.2 image's patch_cudagraph_align.py. Running with cudagraphs enabled gives ~30% more throughput than eager mode.


What's inside the v1.2 image (the 8 modifications)

# | Modification | What it solves
1 | register_qwen3_5_text.py | Adds text-only registry entries (used by v1 weights only — harmless on v2)
2 | patch_cuda_optional_import.py | Wraps the _C_stable_libtorch import in RTLD_LAZY so SM100-only MXFP4 symbols don't break sm_120
3 | patch_kv_cache_utils.py (Ɨ4) | Defaults mamba_block_size = cache_config.block_size or 16 for hybrid attention layers
4 | patch_mrope_text_fallback.py | Inline M-RoPE fallback (T=H=W=arange) — neither Qwen3.6 class implements get_mrope_input_positions upstream
5 | patch_cudagraph_align.py | Removes the FULL-only gate on cudagraph capture-size alignment so PIECEWISE + spec decode doesn't hit cudaErrorIllegalAddress
6 | VLLM_TEST_FORCE_FP8_MARLIN=1 (env, baked default) | Defensive pin on the NVFP4 MoE backend (only MARLIN supports our 256Ɨ512 expert shape; other backends reject in is_supported_config()). Linear NVFP4 path uses CUTLASS natively, unaffected.
7 | TORCH_CUDA_ARCH_LIST="12.0+PTX" (build) | sm_120 build with PTX → driver JITs to sm_121a on Spark
8 | flashinfer-python>=0.6.8 | sm_120 NVFP4 KV-cache decode kernels

Full per-patch breakdown with upstream-issue references: github.com/AEON-7/Qwen3.6-NVFP4-DFlash/blob/main/docs/patches.md


Model Architecture

Property | Value
Architecture | qwen3_5_moe (multimodal — Qwen3_5MoeForConditionalGeneration)
Total params | ~35B
Active params | ~3B / token
Layers | 40 (3Ɨ Gated DeltaNet + 1Ɨ Gated Attention, repeating Ɨ10)
Hidden | 2048
Experts | 256 routed + 1 shared, top-8 per token
Vocabulary | 248,320
Native context | 262,144 (256K)
Extended context (YaRN) | 1,010,000 (1M+)
Multimodal | 27-block ViT vision encoder (preserved BF16)

Hybrid Attention

Attention type | Layers | Q/K/V heads | Head dim
Gated DeltaNet (linear, BF16) | 30 (3 of every 4) | QK 16, V 32 | 128
Gated Attention (NVFP4) | 10 (1 of every 4) | Q 16, KV 2 | 256 (rotary 64)

Quantization Details

Parameter | Value
Tool | llmcompressor
Format | compressed-tensors nvfp4-pack-quantized
Scheme | NVFP4 (FP4 E2M1 + per-block FP8 E4M3 scales + per-tensor FP32 scales)
Block size | 16
Calibration data | open-platypus (256 samples)
Calibration seq_len | 2048
Pipeline | Sequential (Qwen3_5MoeDecoderLayer, layer-by-layer to GPU)
Hardware | NVIDIA RTX PRO 6000 Blackwell (96 GB)
Calibration wall-clock | ~3 hours (40 decoder layers Ɨ ~3-4 min each)
Output | 9 safetensors shards, ~22 GB total
Expert keys (NVFP4) | 122,880 (40 Ɨ 256 Ɨ 3 Ɨ 4)
Visual keys (BF16) | ~333
Linear-attn keys (BF16) | ~270

Quantized layers (NVFP4)

  • Gated Attention projections: q_proj, k_proj, v_proj, o_proj (10 layers)
  • MoE experts (256 Ɨ 40 layers = 10,240 expert modules): gate_proj, up_proj, down_proj
  • Shared expert: same projections

Excluded from quantization (kept BF16)

  • lm_head, embed_tokens — accuracy-critical token projections
  • *.mlp.gate, *.shared_expert_gate — MoE routing (sparsity-critical)
  • *.norm.* — all RMSNorm layers
  • *.visual.* — 27-block ViT vision tower
  • *.linear_attn.* — 30 Gated DeltaNet (Mamba) layers (small relative to MoE; quantizing them tanks accuracy)

The exact recipe + script that produced this checkpoint is at scripts/qwen36_requant_v2.py.
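
For orientation, here is roughly what that recipe amounts to — a sketch under assumptions about llmcompressor's oneshot API and the exact scheme/ignore spelling, not a verbatim copy of the repo script; the authoritative version is recipe.yaml plus scripts/qwen36_requant_v2.py.

# Rough shape of the v2 quantization run (illustrative sketch).
from transformers import AutoModelForImageTextToText
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "tvall43/Qwen3.6-35B-A3B-heretic"
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID, torch_dtype="bfloat16", trust_remote_code=True
)

recipe = QuantizationModifier(
    targets="Linear",
    scheme="NVFP4",                      # FP4 E2M1 weights, block-16 FP8 scales
    ignore=[                             # kept BF16, per the exclusion list above
        "lm_head", "re:.*embed_tokens.*",
        "re:.*mlp\\.gate$", "re:.*shared_expert_gate$",
        "re:.*norm.*", "re:.*visual.*", "re:.*linear_attn.*",
    ],
)

oneshot(
    model=model,
    dataset="open_platypus",             # 256 calibration samples
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
    output_dir="Qwen3.6-35B-A3B-heretic-NVFP4",
)

The ignore patterns mirror the BF16 exclusion list above; the sequential, layer-by-layer calibration noted in the quantization table is handled by llmcompressor's pipeline selection rather than anything in this snippet.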


Recommended sampling parameters

From the Qwen3.6 model card:

Mode | General | Coding | Math/Reasoning
Thinking | T=1.0, P=0.95, K=20, PP=1.5 | T=0.6, P=0.95, K=20, PP=0.0 | T=1.0, P=1.0, K=40, PP=2.0
Instruct (no think) | T=0.7, P=0.8, K=20, PP=1.5 | — | T=1.0, P=0.95, K=20, PP=1.5

(T = temperature, P = top_p, K = top_k, PP = presence penalty)

For maximum DFlash speedup: use T=0 (greedy). Drafter ↔ target agreement drops sharply with sampling — roughly 10-20% acceptance at T=0.7 vs. 60-78% at T=0 — though the measured end-to-end decode penalty at T=0.7 in the bench above was a milder ~15%.

The production compose registers 3 served-model aliases for the same backend so chat clients can route greedy vs sampled requests separately:

  • qwen36-fast → intended for greedy/agentic (T=0)
  • qwen36-deep → intended for creative/sampled (T=0.7)
  • qwen36-35b-heretic → canonical name

Disable thinking per-request:

{"chat_template_kwargs": {"enable_thinking": false}}

Preserve thinking across multi-turn:

{"chat_template_kwargs": {"preserve_thinking": true}}

Common gotcha: with thinking enabled (default), Qwen3.6 spends most of its max_tokens budget on <think> reasoning before emitting content. Use max_tokens ≄ 2048 for thinking-enabled requests — lower budgets often produce content: null with finish_reason: "length".
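
Putting the routing and the budget together — a sketch of how a client might address the two aliases (both hit the same backend; only the client-side sampling and the thinking kwarg differ, and the sampling values here just follow the alias intents above):

# Greedy/agentic vs sampled/creative routing via the served-model aliases.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
messages = [{"role": "user", "content": "Summarize the plot of Hamlet."}]

# Agentic path: thinking off, T=0 for maximum DFlash acceptance.
fast = client.chat.completions.create(
    model="qwen36-fast", messages=messages, max_tokens=512, temperature=0,
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)

# Creative path: default template keeps thinking on, so budget >= 2048 tokens.
deep = client.chat.completions.create(
    model="qwen36-deep", messages=messages, max_tokens=2048,
    temperature=0.7,   # per the alias intent; Qwen-card values are in the table above
)

print(fast.choices[0].message.content)
print(deep.choices[0].message.content)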


Hardware Requirements

Tier | GPU | Notes
Target — production-validated | NVIDIA DGX Spark (128 GB unified, GB10 SM 12.1) | Full 256K context, 128 concurrent streams, this image
Compatible | RTX PRO 6000 Blackwell (96 GB) | What v2 was calibrated on. Same MoE-shape constraint applies (Marlin is still the only NVFP4 MoE backend that accepts the 256Ɨ512 grouped GEMM regardless of chip); linear NVFP4 path uses CUTLASS natively.
Compatible | B200 / GB200 | Image rebuild required (SM 10.0, not SM 12.x)
Compatible | RTX 5090 (32 GB) | Reduced context, low concurrency
Minimum | Any Blackwell GPU (SM 10.0+) | Required for native FP4; sub-Blackwell can run via Marlin W4A16 fallback but with reduced throughput

Files

File | Size | Description
model-00001-of-00009.safetensors … model-00009-of-00009.safetensors | ~22 GB total | NVFP4 quantized weights (~123,724 tensors across 9 shards)
model.safetensors.index.json | ~5 MB | Shard index
config.json | ~7 KB | Model + quantization config (Qwen3_5MoeForConditionalGeneration)
tokenizer.json | ~20 MB | Qwen tokenizer (248K vocab)
tokenizer_config.json | ~1 KB |
chat_template.jinja | ~8 KB | Qwen3.6 chat template (thinking + tool calling)
preprocessor_config.json | ~500 B | Image preprocessor (kept for multimodal compat)
generation_config.json | ~213 B |
recipe.yaml | ~500 B | llmcompressor recipe used

Disclaimer

THIS IS AN UNCENSORED MODEL. By downloading, accessing, or using this model you expressly assume full and sole responsibility for all outputs generated, all actions taken based on outputs, and compliance with applicable laws. The authors are not responsible for any harmful, illegal, or objectionable content. These tools serve legitimate purposes including security research, red-teaming, content analysis, and creative work. Implement safeguards appropriate to your use case and jurisdiction.

License

Apache 2.0 (inherited from Qwen3.6 base).

Credits


ā˜• Support the work

If this release has been useful, tips are deeply appreciated — they go directly toward more compute, more models, and more open releases.

  • ₿ Bitcoin (BTC): bc1q09xmzn00q4z3c5raene0f3pzn9d9pvawfm0py4
  • Īž Ethereum (ETH): 0x1512667F6D61454ad531d2E45C0a5d1fd82D0500
  • ā—Ž Solana (SOL): DgQsjHdAnT5PNLQTNpJdpLS3tYGpVcsHQCkpoiAKsw8t
  • ā“œ Monero (XMR): 836XrSKw4R76vNi3QPJ5Fa9ugcyvE2cWmKSPv3AhpTNNKvqP8v5ba9JRL4Vh7UnFNjDz3E2GXZDVVenu3rkZaNdUFhjAvgd

Ethereum L2s (Base, Arbitrum, Optimism, Polygon, etc.) and EVM-compatible tokens can be sent to the same Ethereum address.
