
Qwen3.5-9B-DFlash

Paper | GitHub | Blog

DFlash is a speculative decoding method that uses a lightweight block diffusion model to draft multiple tokens in parallel, achieving up to a 4.4x speedup over autoregressive decoding. This repository contains the drafter model; it must be paired with the target model Qwen/Qwen3.5-9B.
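
In one loop: the drafter proposes a whole block of tokens per step, and the target model verifies them in a single forward pass, committing only the prefix it agrees with. A minimal sketch of that loop, with hypothetical draft_block and verify_with_target helpers standing in for the real vLLM/SGLang integration:

def speculative_decode(prompt_ids, max_new_tokens, block_size=16):
    # Sketch only: draft_block and verify_with_target are hypothetical
    # stand-ins. The drafter is this repo's block diffusion model; the
    # verifier is the target model (Qwen/Qwen3.5-9B).
    tokens = list(prompt_ids)
    generated = 0
    while generated < max_new_tokens:
        # One diffusion pass drafts an entire block in parallel,
        # instead of one token per autoregressive forward pass.
        draft = draft_block(tokens, n=block_size)
        # The target scores the draft in a single forward pass and
        # accepts the longest prefix consistent with its own
        # distribution (plus one corrected token), so the output
        # matches plain autoregressive decoding.
        accepted = verify_with_target(tokens, draft)
        tokens.extend(accepted)
        generated += len(accepted)
    return tokens[len(prompt_ids):]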

[Figure: DFlash architecture]

Quick Start

Installation

vLLM:

# Stable release:
uv pip install vllm

# Or, for the latest nightly wheels:
uv pip install -U vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly

SGLang:

uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/20547/head#subdirectory=python"

Launch Server

vLLM:

vllm serve Qwen/Qwen3.5-9B \
  --speculative-config '{"method": "dflash", "model": "z-lab/Qwen3.5-9B-DFlash", "num_speculative_tokens": 15}' \
  --attention-backend flash_attn \
  --max-num-batched-tokens 32768
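
Once the server is up, a quick sanity check through the OpenAI-compatible API (vLLM listens on port 8000 by default; pass --port to change it):

# Minimal smoke test against the vLLM server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=[{"role": "user", "content": "Say hello."}],
    max_tokens=32,
)
print(resp.choices[0].message.content)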

SGLang:

# Optional: enable schedule overlapping (experimental, may not be stable)
# export SGLANG_ENABLE_SPEC_V2=1
# export SGLANG_ENABLE_DFLASH_SPEC_V2=1
# export SGLANG_ENABLE_OVERLAP_PLAN_STREAM=1

python -m sglang.launch_server \
    --model-path Qwen/Qwen3.5-9B \
    --speculative-algorithm DFLASH \
    --speculative-draft-model-path z-lab/Qwen3.5-9B-DFlash \
    --speculative-num-draft-tokens 16 \
    --tp-size 1 \
    --attention-backend fa3 \
    --mem-fraction-static 0.75 \
    --mamba-scheduler-strategy extra_buffer \
    --trust-remote-code

Tip: For long-context or agentic workloads, add --speculative-dflash-draft-window-size WINDOW_SIZE to enable sliding-window attention for the drafter.

Usage

from openai import OpenAI

# Port 30000 is SGLang's default; for the vLLM server above, use
# http://localhost:8000/v1 instead.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

# Requests target the base model name; drafting with DFlash happens
# transparently on the server side.
response = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=4096,
    temperature=0.0,
)
print(response.choices[0].message.content)
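
The same endpoint also supports streaming, which makes the speculative speedup easy to see interactively. Continuing with the client from the snippet above:

# Stream tokens as the server commits them; accepted draft blocks
# arrive in bursts rather than one token at a time.
stream = client.chat.completions.create(
    model="Qwen/Qwen3.5-9B",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=4096,
    temperature=0.0,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)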

Benchmark Results

Setup: a single NVIDIA B200 GPU, SGLang, thinking enabled, max output length 4096 tokens. We report end-to-end throughput, including prefill time. See our GitHub repository for reproduction scripts.

Throughput and Speedup

DFlash outperforms MTP across all block sizes and concurrency levels, achieving up to 4.4x speedup at concurrency 1.

Tokens/sec (speedup vs. autoregressive baseline)

Block Size = 16

| Task      | Concurrency | AR   | MTP         | DFlash      |
|-----------|-------------|------|-------------|-------------|
| Math500   | 1           | 197  | 379 (1.9x)  | 808 (4.1x)  |
|           | 8           | 1472 | 2569 (1.7x) | 5114 (3.5x) |
|           | 16          | 2831 | 4206 (1.5x) | 7508 (2.7x) |
|           | 32          | 4701 | 6028 (1.3x) | 9286 (2.0x) |
| GSM8K     | 1           | 198  | 342 (1.7x)  | 697 (3.5x)  |
|           | 8           | 1470 | 2331 (1.6x) | 4351 (3.0x) |
|           | 16          | 2781 | 3794 (1.4x) | 6325 (2.3x) |
|           | 32          | 4581 | 5445 (1.2x) | 7559 (1.6x) |
| HumanEval | 1           | 193  | 378 (2.0x)  | 840 (4.4x)  |
|           | 8           | 1414 | 2461 (1.7x) | 4837 (3.4x) |
|           | 16          | 2638 | 3916 (1.5x) | 6722 (2.5x) |
|           | 32          | 4217 | 5423 (1.3x) | 8285 (2.0x) |
| MBPP      | 1           | 194  | 335 (1.7x)  | 755 (3.9x)  |
|           | 8           | 1421 | 2064 (1.5x) | 4202 (3.0x) |
|           | 16          | 2667 | 3358 (1.3x) | 5843 (2.2x) |
|           | 32          | 4160 | 4610 (1.1x) | 6961 (1.7x) |
| MT-Bench  | 1           | 194  | 297 (1.5x)  | 587 (3.0x)  |
|           | 8           | 1451 | 1945 (1.3x) | 3611 (2.5x) |
|           | 16          | 2787 | 3115 (1.1x) | 5185 (1.9x) |
|           | 32          | 4578 | 4453 (1.0x) | 6225 (1.4x) |
| Alpaca    | 1           | 197  | 278 (1.4x)  | 545 (2.8x)  |
|           | 8           | 1460 | 1816 (1.2x) | 3382 (2.3x) |
|           | 16          | 2789 | 3009 (1.1x) | 5002 (1.8x) |
|           | 32          | 4574 | 4326 (1.0x) | 6247 (1.4x) |
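
The parenthesized speedups are simply each method's throughput divided by the AR baseline in the same row; the headline 4.4x is HumanEval at concurrency 1:

# Speedups in the tables are throughput ratios vs. the AR baseline.
# Example: HumanEval, concurrency 1, block size 16 (numbers from above).
ar, mtp, dflash = 193, 378, 840  # tokens/sec
print(f"MTP:    {mtp / ar:.1f}x")     # 2.0x
print(f"DFlash: {dflash / ar:.1f}x")  # 4.4x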

Block Size = 8

| Task      | Concurrency | AR   | MTP         | DFlash       |
|-----------|-------------|------|-------------|--------------|
| Math500   | 1           | 195  | 452 (2.3x)  | 664 (3.4x)   |
|           | 8           | 1458 | 3199 (2.2x) | 4703 (3.2x)  |
|           | 16          | 2825 | 5390 (1.9x) | 7804 (2.8x)  |
|           | 32          | 4712 | 7941 (1.7x) | 11003 (2.3x) |
| GSM8K     | 1           | 196  | 421 (2.1x)  | 591 (3.0x)   |
|           | 8           | 1464 | 2954 (2.0x) | 4106 (2.8x)  |
|           | 16          | 2775 | 4939 (1.8x) | 6733 (2.4x)  |
|           | 32          | 4567 | 7246 (1.6x) | 9375 (2.1x)  |
| HumanEval | 1           | 193  | 446 (2.3x)  | 667 (3.5x)   |
|           | 8           | 1411 | 3020 (2.1x) | 4366 (3.1x)  |
|           | 16          | 2631 | 4884 (1.9x) | 6815 (2.6x)  |
|           | 32          | 4077 | 6819 (1.7x) | 8899 (2.2x)  |
| MBPP      | 1           | 197  | 409 (2.1x)  | 634 (3.2x)   |
|           | 8           | 1440 | 2710 (1.9x) | 3992 (2.8x)  |
|           | 16          | 2682 | 4435 (1.7x) | 6128 (2.3x)  |
|           | 32          | 4152 | 6213 (1.5x) | 8026 (1.9x)  |
| MT-Bench  | 1           | 198  | 374 (1.9x)  | 525 (2.7x)   |
|           | 8           | 1478 | 2612 (1.8x) | 3668 (2.5x)  |
|           | 16          | 2836 | 4323 (1.5x) | 5905 (2.1x)  |
|           | 32          | 4617 | 6335 (1.4x) | 8288 (1.8x)  |
| Alpaca    | 1           | 196  | 360 (1.8x)  | 503 (2.6x)   |
|           | 8           | 1450 | 2497 (1.7x) | 3493 (2.4x)  |
|           | 16          | 2802 | 4194 (1.5x) | 5714 (2.0x)  |
|           | 32          | 4572 | 6175 (1.4x) | 8077 (1.8x)  |

Acceptance Length

Each cell reports MTP / DFlash.

| Task      | Block Size = 8 | Block Size = 16 |
|-----------|----------------|-----------------|
| Math500   | 5.46 / 5.67    | 6.66 / 7.34     |
| GSM8K     | 5.27 / 5.33    | 6.37 / 6.71     |
| HumanEval | 5.39 / 5.87    | 6.61 / 7.93     |
| MBPP      | 4.78 / 5.31    | 5.49 / 6.62     |
| MT-Bench  | 4.52 / 4.53    | 5.30 / 5.49     |
| Alpaca    | 4.38 / 4.35    | 5.03 / 5.10     |
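
Acceptance length is the average number of tokens committed per verification step, so it caps the achievable speedup: accepting ~7.9 tokens per target forward pass could yield at most ~7.9x. A rough back-of-envelope model, under the illustrative (not measured) assumption that drafting costs a fixed fraction of a target forward pass:

# Back-of-envelope only: relates acceptance length to an idealized
# speedup ceiling. draft_cost_ratio is a hypothetical per-step drafter
# cost relative to one target forward pass.
def idealized_speedup(acceptance_len, draft_cost_ratio=0.1):
    return acceptance_len / (1.0 + draft_cost_ratio)

for task, b16 in [("HumanEval", 7.93), ("Alpaca", 5.10)]:
    print(f"{task}: <= ~{idealized_speedup(b16):.1f}x")

The measured end-to-end speedups at concurrency 1 (4.4x and 2.8x) sit below these ceilings, as expected, since verification, scheduling, and prefill are not free.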

Acknowledgements

Special thanks to David Wang for his outstanding engineering support on this project. We are also grateful to Modal, InnoMatrix, and Yotta Labs for providing the compute resources used to train this draft model.

Citation

If you find DFlash useful, please cite our work. To share feedback on DFlash or request new model support, please fill out this form: DFlash Feedback.

@article{chen2026dflash,
  title   = {{DFlash: Block Diffusion for Flash Speculative Decoding}},
  author  = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
  journal = {arXiv preprint arXiv:2602.06036},
  year    = {2026}
}