DavidAU/Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF
DavidAU • Ultimate: exceeds Qwen 3.6 27B performance, uncensored, with NEO-Di-Matrix quants to deliver all that power in GGUF form. Q4/IQ4 quants clock in at 94% of full precision (BF16), Q6 at just under 98%, and even IQ2_M reaches 83% of BF16. Five metrics per quant, plus benchmarks.
Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF
Team Qwen exceeded all expectations with the new Qwen 3.6 27B model [it even exceeds their own 397B model] AND the Gemma 4s too, so here are balanced, precision quants to match.
And 256k context too. Check out Team Qwen's detailed stats for 3.6 27B below.
And now: freedom (uncensored), a stronger model (than Qwen 3.6 27B) via tuning by Unsloth on a custom dataset, and ultimate GGUF quant performance as well, using NEO/Code Di-Matrix.
DETAILS:
- Heretic'ed and de-censored: the "nanny" was "evicted" from Qwen 3.6 27B.
- Fine-tuned via Unsloth post-Heretic'ing; this model now exceeds the performance of the root (censored) Qwen 3.6 27B model.
- NEO/NEO-Code Di-matrix GGUF performance, with Q4_K_S clocking in at 94% of BF16/full-precision performance.
Pure FREEDOM (Heretic stats):

| Metric | This model | Original model (Qwen/Qwen3.6-27B) |
|---|---|---|
| KL divergence | 0.0469 | 0 (by definition) |
| Refusals | 4/100 | 99/100 |

KLD: less than 0.3 is great; lower than that is excellent. This measures how much the "heretic" version differs from the original model.
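Refusal numbers like the 4/100 above come from running a fixed prompt set against the model and counting refusals. As a rough illustration of how such a count can be produced (the endpoint, model name, prompt file, and refusal-phrase list below are assumptions for the sketch, not the actual Heretic harness):

```python
from openai import OpenAI

# Illustrative only: endpoint, model name, prompt file, and refusal
# phrases are assumptions, not the actual Heretic evaluation harness.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "i won't")

with open("refusal_prompts.txt") as f:  # assumed: one prompt per line
    prompts = [line.strip() for line in f if line.strip()]

refusals = 0
for prompt in prompts:
    reply = client.chat.completions.create(
        model="qwen3.6-27b-heretic",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
        temperature=0.7,
    ).choices[0].message.content.lower()
    # Count the reply as a refusal if any marker phrase appears.
    refusals += any(marker in reply for marker in REFUSAL_MARKERS)

print(f"Refusals: {refusals}/{len(prompts)}")
```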
STRONGER than Qwen 3.6 27B:
A low-level fine-tune (post-Heretic'ing) to boost the model's core power just a wee bit; we don't want to mess with the "Qwen magic":
IN HOUSE BENCHMARKS [by Nightmedia]:

| Model (mxfp8, instruct mode) | arc-c | arc-e | boolq | hswag | obkqa | piqa | wino |
|---|---|---|---|---|---|---|---|
| Qwen3.6-27B-Heretic2-Uncensored-Finetune-Thinking (this repo) | 0.673 | 0.846 | 0.905 | ... | ... | ... | ... |
| Qwen3.6-27B (base, untuned, by Qwen) | 0.647 | 0.803 | 0.910 | 0.773 | 0.450 | 0.806 | 0.742 |
NOTE: Instruct mode will often test higher than "thinking" mode because thinking consumes tokens and context.
NEO-CODE-Di-IMatrix-MAX-GGUF Quants:
Quant "engineering" focused on balance and precision, vs raw power (which seemed in some cases to destabilize the model/quant).
In other words benchmarks / stats determined the best quants, not guesswork or one size fits all approach.
This was done to ensure long context, long/multi-convos, coding and math etc etc performed as close as possible to full precision model as well as one-shot, and standard prompting / problem solving.
TWO Imatrix datasets were used to do this by first getting "raw stats" on both, then merging them to get the best of each imatrix in one dataset then this was used to make the "NEO-CODE-Di-IMatrix-MAX" quants.
Additional tensor adjustments were also made, which were also measured (benched) and adjusted too.
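Conceptually, the merge combines per-tensor importance statistics from the two runs. This is a sketch of the idea only: llama.cpp stores imatrix data in its own binary format, and the dict-of-arrays layout and per-channel max rule below are illustrative assumptions, not the actual merge procedure:

```python
import numpy as np

def merge_imatrix(neo: dict[str, np.ndarray],
                  neo_code: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    """Conceptual merge of two importance matrices.

    Each dict maps a tensor name to its per-channel importance scores.
    Here we keep, per channel, the stronger signal from either dataset;
    the real NEO-CODE-Di merge rule is not published, so this only
    illustrates the "best of each" idea.
    """
    merged = {}
    for name in neo.keys() & neo_code.keys():
        merged[name] = np.maximum(neo[name], neo_code[name])
    return merged

# Toy usage with made-up tensors:
a = {"blk.0.attn_q.weight": np.array([0.9, 0.1, 0.4])}
b = {"blk.0.attn_q.weight": np.array([0.2, 0.8, 0.3])}
print(merge_imatrix(a, b))  # {'blk.0.attn_q.weight': array([0.9, 0.8, 0.4])}
```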
IQ2_M: 83% accuracy of the BF16/full-precision model at only 20% of the original model's size.
Q4_K_S: 94% accuracy of the BF16/full-precision model at only 25% of the original model's size.
[See the full chart below: all quants, compared against the non-Heretic quants too.]
GGUF POWER UPS:
A radically stronger, more potent GGUF for all use cases.
Meets Unsloth quality, and exceeds it in some metrics (see below).
DETAILS:
- DI-MATRIX (dual imatrix) of the NEO and NEO-CODE imatrix datasets (by DavidAU).
- All Unsloth tensor enhancements + additional enhancements, CALIBRATED through metrics testing.
- Every quant benchmarked against BF16/full precision model.
- There is a special Q8_0 quant with BF16 components; imatrix has no effect on Q8/BF16 tensors.
VISION:
- Vision (images) tested.
- For image input you also need one of the "mmproj" files (just one), downloaded and placed in the same folder as the GGUF.
Qwen Model Settings (suggested):
- Thinking mode for general tasks: temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
- Thinking mode for precise coding tasks (e.g. WebDev): temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
- Instruct (or non-thinking) mode: temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- Context window: a minimum of 8k to 16k.
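If you drive the model through an OpenAI-compatible endpoint (see the Quickstart further down), the suggested settings can be kept as reusable presets. A minimal sketch, with preset names of my own choosing, and with `top_k`/`min_p`/`repetition_penalty` passed via `extra_body` (accepted by vLLM and SGLang, though support varies by framework):

```python
# The suggested Qwen settings as reusable presets (names are illustrative).
SAMPLING_PRESETS = {
    "thinking_general": dict(
        temperature=1.0, top_p=0.95, presence_penalty=0.0,
        extra_body={"top_k": 20, "min_p": 0.0, "repetition_penalty": 1.0},
    ),
    "thinking_coding": dict(
        temperature=0.6, top_p=0.95, presence_penalty=0.0,
        extra_body={"top_k": 20, "min_p": 0.0, "repetition_penalty": 1.0},
    ),
    "instruct": dict(
        temperature=0.7, top_p=0.80, presence_penalty=1.5,
        extra_body={"top_k": 20, "min_p": 0.0, "repetition_penalty": 1.0},
    ),
}

# Usage with the OpenAI SDK (client/model configured elsewhere):
# client.chat.completions.create(model=..., messages=...,
#                                **SAMPLING_PRESETS["instruct"])
```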
FULL STATS, per quant, compared against the "non-Heretic" quants too:
This table shows all the quants for the Heretic fine-tune (this repo) and compares them against the NEO-CODE-Dimatrix quants of the non-Heretic model (numbers in "[]").
Non-Heretic quants are here:
https://huggingface.co/DavidAU/Qwen3.6-27B-NEO-CODE-Di-IMatrix-MAX-GGUF
| Metric | IQ2_M | IQ3_M | IQ4_XS | IQ4_NL | Q4_K_S | Q4_K_M | Q5_K_S | Q5_K_M | Q6_K | Q8_0 |
|---|---|---|---|---|---|---|---|---|---|---|
| Same Top P (%) | 82.82% [82.66%] | 89.76% [89.63%] | 94.14% [93.98%] | 94.19% [94.04%] | 94.06% [93.90%] | 94.51% [94.33%] | 95.89% [95.84%] | 96.11% [96.09%] | 97.41% [97.34%] | 98.47% [98.38%] |
| Mean KLD | 0.1556 [0.1840] | 0.0569 [0.0749] | 0.0172 [0.0261] | 0.0169 [0.0260] | 0.0174 [0.0267] | 0.0147 [0.0242] | 0.0080 [0.0142] | 0.0069 [0.0132] | 0.0024 [0.0056] | 0.0013 [0.0034] |
| 99.9% KLD | 4.48 [7.22] | 1.77 [4.88] | 0.66 [2.18] | 0.65 [2.36] | 0.71 [2.34] | 0.58 [2.62] | 0.36 [1.69] | 0.29 [1.58] | 0.09 [0.50] | 0.05 [0.20] |
| RMS Δp (%) | 11.65% [12.52%] | 6.94% [7.69%] | 3.70% [4.34%] | 3.65% [4.36%] | 3.76% [4.41%] | 3.46% [4.13%] | 2.52% [3.22%] | 2.32% [3.019%] | 1.43% [1.988%] | 1.08% [1.538%] |
| Mean PPL (Q) | 7.549 [7.746] | 6.979 [7.222] | 6.769 [6.977] | 6.748 [6.971] | 6.757 [6.948] | 6.737 [6.946] | 6.684 (!) [6.894 (!)] | 6.678 (!) [6.885 (!)] | 6.685 (!) [6.924] | 6.695 [6.914] |
NOTES:
- With the exception of "Same Top P (%)" (which measures how closely the quant matches full precision, so higher is better), lower is better for all other metrics.
- Numbers in [] are for NON-Heretic quants.
- "(!)" for "Mean PPL (Q)" are LOWER than the BF16/full (6.900) precision model; (!) Heretic Version is BF16/full is 6.688.
- Q8_0 contains BF16 components and is not affected by the imatrix. This is an ULTIMATE PERFORMANCE quant. The full detailed metrics for this quant are below.
- Q2/Q3 K-quants are not included because IQ2/IQ3 quants are faster and smaller, with the same or slightly better quality.
- To see how these metrics are generated see "SUPPLEMENT: Q6_K, Q8_0 ULTIMATE PERFORMANCE, detailed metrics" below.
A Beginner's Primer to Quantization Metrics
Quantization compresses Large Language Models to make them run faster and on cheaper hardware. To know if a model is still "smart" after compression, we use these five key metrics:
1. Same Top P (%)
What it is: How often the compressed model picks the exact same word as its first choice compared to the original, uncompressed model.
In plain English: This is "Word-for-Word Accuracy." If this is 94%, it means in 94 out of 100 cases, the model’s top choice remains identical to the original.
The Goal: Higher is better (93% or above is near-perfect).
2. Mean KLD (KL Divergence)
What it is: A measure of how much the "logic" or "thought process" of the model has drifted. It looks at the probabilities of all possible next words, not just the top one.
In plain English: This is the "Reasoning Loss." It measures how much the model's internal "brain" changed during compression.
The Goal: Lower is better (Below 0.03 is excellent).
3. 99.9% KLD (Stability)
What it is: This focuses on the "worst" 0.1% of tokens—the most difficult edge cases the model encountered during testing.
In plain English: This is the "Reliability Score." It tells you if the model is prone to "glitching" or producing gibberish when the conversation gets complicated.
The Goal: Lower is better (Lower numbers mean a more stable model).
4. RMS Δp (%)
What it is: The average change in the model’s confidence levels.
In plain English: This is "Confidence Alignment." Even if the model picks the right word, does it feel as sure as the original? High numbers mean the model feels "jittery" or hesitant.
The Goal: Lower is better (around 4% or below is good).
5. Mean PPL (Perplexity)
What it is: A measure of how "surprised" the model is by the text it is reading.
In plain English: This is "Fluency." If perplexity goes up significantly, the model’s writing will feel less natural, more robotic, or repetitive.
The Goal: Lower is better (Should be as close to the Base model as possible).
Quick Comparison Cheat Sheet
| Metric | Ideal Trend | What it Measures |
|---|---|---|
| Same Top P | ⬆ Higher | Accuracy & Word Choice |
| Mean KLD | ⬇ Lower | Logical Drift |
| 99.9% KLD | ⬇ Lower | Stability & Reliability |
| RMS Δp | ⬇ Lower | Confidence & Certainty |
| Mean PPL | ⬇ Lower | Fluency & Natural Flow |
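To make these five metrics concrete, here is a small numpy sketch computing them from per-token logits of the base and quantized models over the same text. The logit arrays are assumed inputs; in practice llama.cpp derives all of this from logits.dat, as shown in the supplement below:

```python
import numpy as np

def quant_metrics(logits_base: np.ndarray, logits_quant: np.ndarray,
                  target_ids: np.ndarray) -> dict:
    """logits_*: [num_tokens, vocab] raw logits over the same text;
    target_ids: the actual next-token ids. A sketch of the five metrics."""
    def softmax(x):
        x = x - x.max(axis=-1, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=-1, keepdims=True)

    p, q = softmax(logits_base), softmax(logits_quant)
    rows = np.arange(len(target_ids))

    # 1. Same Top P: how often the quant's top-1 token matches the base's.
    same_top = (p.argmax(-1) == q.argmax(-1)).mean() * 100
    # 2/3. Per-token KL divergence of the quant from the base.
    kld = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(-1)
    # 4. RMS change in the probability assigned to the true next token.
    dp = q[rows, target_ids] - p[rows, target_ids]
    rms_dp = np.sqrt((dp ** 2).mean()) * 100
    # 5. Perplexity of the quant on the true tokens.
    ppl = float(np.exp(-np.log(q[rows, target_ids] + 1e-12).mean()))

    return {"same_top_pct": same_top, "mean_kld": kld.mean(),
            "kld_99_9": np.quantile(kld, 0.999), "rms_dp_pct": rms_dp,
            "mean_ppl": ppl}
```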
SUPPLEMENT: Q6_K, Q8_0 ULTIMATE PERFORMANCE, detailed metrics
All the quants have this report.
Q6_K and Q8_0 - Ultimate, with BF16 components.
Report generated by:

1. Generate the logits.dat file from the BF16 GGUF:
   `./llama-perplexity -m w:/main.gguf -f wiki.test.raw --kl-divergence-base logits.dat`
2. Generate the quant:
   `./llama-quantize ...`
3. Test the quant:
   `./llama-perplexity -m Q6_K.gguf -f wiki.test.raw --kl-divergence-base logits.dat --kl-divergence`

"wiki.test.raw" is the standard corpus for PPL testing; it provides 580 chunks, i.e. 580 tests PER QUANT.
Q6_K
====== Perplexity statistics ======
Mean PPL(Q) : 6.685104 ± 0.042129
Mean PPL(base) : 6.687935 ± 0.042136
Cor(ln(PPL(Q)), ln(PPL(base))): 99.93%
Mean ln(PPL(Q)/PPL(base)) : -0.000423 ± 0.000227
Mean PPL(Q)/PPL(base) : 0.999577 ± 0.000227
Mean PPL(Q)-PPL(base) : -0.002832 ± 0.001520
====== KL divergence statistics ======
Mean KLD: 0.002458 ± 0.000147
Maximum KLD: 13.136569
99.9% KLD: 0.093266
99.0% KLD: 0.017381
95.0% KLD: 0.005969
90.0% KLD: 0.003742
Median KLD: 0.000984
10.0% KLD: 0.000019
5.0% KLD: 0.000004
1.0% KLD: -0.000001
0.1% KLD: -0.000010
Minimum KLD: -0.000086
====== Token probability statistics ======
Mean Δp: -0.011 ± 0.004 %
Maximum Δp: 99.656%
99.9% Δp: 9.144%
99.0% Δp: 3.501%
95.0% Δp: 1.674%
90.0% Δp: 0.990%
75.0% Δp: 0.214%
Median Δp: 0.000%
25.0% Δp: -0.232%
10.0% Δp: -1.040%
5.0% Δp: -1.728%
1.0% Δp: -3.579%
0.1% Δp: -9.776%
Minimum Δp: -76.380%
RMS Δp : 1.433 ± 0.048 %
Same top p: 97.408 ± 0.041 %
Q8_0
====== Perplexity statistics ======
Mean PPL(Q) : 6.695419 ± 0.042239
Mean PPL(base) : 6.687935 ± 0.042136
Cor(ln(PPL(Q)), ln(PPL(base))): 99.96%
Mean ln(PPL(Q)/PPL(base)) : 0.001118 ± 0.000174
Mean PPL(Q)/PPL(base) : 1.001119 ± 0.000175
Mean PPL(Q)-PPL(base) : 0.007484 ± 0.001171
====== KL divergence statistics ======
Mean KLD: 0.001326 ± 0.000074
Maximum KLD: 7.088220
99.9% KLD: 0.048507
99.0% KLD: 0.007663
95.0% KLD: 0.002989
90.0% KLD: 0.002106
Median KLD: 0.000536
10.0% KLD: 0.000006
5.0% KLD: 0.000001
1.0% KLD: -0.000002
0.1% KLD: -0.000013
Minimum KLD: -0.000070
====== Token probability statistics ======
Mean Δp: -0.015 ± 0.003 %
Maximum Δp: 58.314%
99.9% Δp: 5.862%
99.0% Δp: 2.932%
95.0% Δp: 1.436%
90.0% Δp: 0.657%
75.0% Δp: 0.095%
Median Δp: 0.000%
25.0% Δp: -0.086%
10.0% Δp: -0.718%
5.0% Δp: -1.621%
1.0% Δp: -3.037%
0.1% Δp: -6.171%
Minimum Δp: -59.254%
RMS Δp : 1.082 ± 0.027 %
Same top p: 98.474 ± 0.032 %
Information about this model from Qwen:
Qwen3.6-27B
[!Note] This repository contains model weights and configuration files for the post-trained model in the Hugging Face Transformers format.
These artifacts are compatible with Hugging Face Transformers, vLLM, SGLang, KTransformers, etc.
Following the February release of the Qwen3.5 series, we're pleased to share the first open-weight variant of Qwen3.6. Built on direct feedback from the community, Qwen3.6 prioritizes stability and real-world utility, offering developers a more intuitive, responsive, and genuinely productive coding experience.
Qwen3.6 Highlights
This release delivers substantial upgrades, particularly in
- Agentic Coding: the model now handles frontend workflows and repository-level reasoning with greater fluency and precision.
- Thinking Preservation: we've introduced a new option to retain reasoning context from historical messages, streamlining iterative development and reducing overhead.

For more details, please refer to our blog post Qwen3.6-27B.
Model Overview
- Type: Causal Language Model with Vision Encoder
- Training Stage: Pre-training & Post-training
- Language Model
- Number of Parameters: 27B
- Hidden Dimension: 5120
- Token Embedding: 248320 (Padded)
- Number of Layers: 64
- Hidden Layout: 16 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))
- Gated DeltaNet:
- Number of Linear Attention Heads: 48 for V and 16 for QK
- Head Dimension: 128
- Gated Attention:
- Number of Attention Heads: 24 for Q and 4 for KV
- Head Dimension: 256
- Rotary Position Embedding Dimension: 64
- Feed Forward Network:
- Intermediate Dimension: 17408
- LM Output: 248320 (Padded)
- MTP (Multi-Token Prediction): trained with multiple steps
- Context Length: 262,144 natively and extensible up to 1,010,000 tokens.
Benchmark Results
Language
| Benchmark | Qwen3.5-27B | Qwen3.5-397B-A17B | Gemma4-31B | Claude 4.5 Opus | Qwen3.6-35B-A3B | Qwen3.6-27B |
|---|---|---|---|---|---|---|
| Coding Agent | ||||||
| SWE-bench Verified | 75.0 | 76.2 | 52.0 | 80.9 | 73.4 | 77.2 |
| SWE-bench Pro | 51.2 | 50.9 | 35.7 | 57.1 | 49.5 | 53.5 |
| SWE-bench Multilingual | 69.3 | 69.3 | 51.7 | 77.5 | 67.2 | 71.3 |
| Terminal-Bench 2.0 | 41.6 | 52.5 | 42.9 | 59.3 | 51.5 | 59.3 |
| SkillsBench Avg5 | 27.2 | 30.0 | 23.6 | 45.3 | 28.7 | 48.2 |
| QwenWebBench | 1068 | 1186 | 1197 | 1536 | 1397 | 1487 |
| NL2Repo | 27.3 | 32.2 | 15.5 | 43.2 | 29.4 | 36.2 |
| Claw-Eval Avg | 64.3 | 70.7 | 48.5 | 76.6 | 68.7 | 72.4 |
| Claw-Eval Pass^3 | 46.2 | 48.1 | 25.0 | 59.6 | 50.0 | 60.6 |
| QwenClawBench | 52.2 | 51.8 | 41.7 | 52.3 | 52.6 | 53.4 |
| Knowledge | ||||||
| MMLU-Pro | 86.1 | 87.8 | 85.2 | 89.5 | 85.2 | 86.2 |
| MMLU-Redux | 93.2 | 94.9 | 93.7 | 95.6 | 93.3 | 93.5 |
| SuperGPQA | 65.6 | 70.4 | 65.7 | 70.6 | 64.7 | 66.0 |
| C-Eval | 90.5 | 93.0 | 82.6 | 92.2 | 90.0 | 91.4 |
| STEM & Reasoning | ||||||
| GPQA Diamond | 85.5 | 88.4 | 84.3 | 87.0 | 86.0 | 87.8 |
| HLE | 24.3 | 28.7 | 19.5 | 30.8 | 21.4 | 24.0 |
| LiveCodeBench v6 | 80.7 | 83.6 | 80.0 | 84.8 | 80.4 | 83.9 |
| HMMT Feb 25 | 92.0 | 94.8 | 88.7 | 92.9 | 90.7 | 93.8 |
| HMMT Nov 25 | 89.8 | 92.7 | 87.5 | 93.3 | 89.1 | 90.7 |
| HMMT Feb 26 | 84.3 | 87.9 | 77.2 | 85.3 | 83.6 | 84.3 |
| IMOAnswerBench | 79.9 | 80.9 | 74.5 | 84.0 | 78.9 | 80.8 |
| AIME26 | 92.6 | 93.3 | 89.2 | 95.1 | 92.7 | 94.1 |
* SWE-Bench Series: Internal agent scaffold (bash + file-edit tools); temp=1.0, top_p=0.95, 200K context window. We correct some problematic tasks in the public set of SWE-bench Pro and evaluate all baselines on the refined benchmark.
* Terminal-Bench 2.0: Harbor/Terminus-2 harness; 3h timeout, 32 CPU/48 GB RAM; temp=1.0, top_p=0.95, top_k=20, max_tokens=80K, 256K ctx; avg of 5 runs.
* SkillsBench: Evaluated via OpenCode on 78 tasks (self-contained subset, excluding API-dependent tasks); avg of 5 runs.
* NL2Repo: Other models are evaluated via Claude Code (temp=1.0, top_p=0.95, max_turns=900).
* QwenClawBench: A real-user-distribution Claw agent benchmark; temp=0.6, 256K ctx.
* QwenWebBench: An internal front-end code generation benchmark; bilingual (EN/CN), 7 categories (Web Design, Web Apps, Games, SVG, Data Visualization, Animation, and 3D); auto-render + multimodal judge (code/visual correctness); BT/Elo rating system.
* AIME 26: We use the full AIME 2026 (I & II), where the scores may differ from Qwen 3.5 notes.
Vision Language
| Benchmark | Qwen3.5-27B | Qwen3.5-397B-A17B | Gemma4-31B | Claude 4.5 Opus | Qwen3.6-35B-A3B | Qwen3.6-27B |
|---|---|---|---|---|---|---|
| STEM & Puzzle | ||||||
| MMMU | 82.3 | 85.0 | 80.4 | 80.7 | 81.7 | 82.9 |
| MMMU-Pro | 75.0 | 79.0 | 76.9 | 70.6 | 75.3 | 75.8 |
| MathVista mini | 87.8 | -- | 79.3 | -- | 86.4 | 87.4 |
| DynaMath | 87.7 | 86.3 | 79.5 | 79.7 | 82.8 | 85.6 |
| VlmsAreBlind | 96.9 | -- | 87.2 | -- | 96.6 | 97.0 |
| General VQA | ||||||
| RealWorldQA | 83.7 | 83.9 | 72.3 | 77.0 | 85.3 | 84.1 |
| MMStar | 81.0 | 83.8 | 77.3 | 73.2 | 80.7 | 81.4 |
| MMBenchEN-DEV-v1.1 | 92.6 | -- | 90.9 | -- | 92.8 | 92.3 |
| SimpleVQA | 56.0 | 67.1 | 52.9 | 65.7 | 58.9 | 56.1 |
| Document Understanding | ||||||
| CharXiv RQ | 79.5 | 80.8 | 67.9 | 68.5 | 78.0 | 78.4 |
| CC-OCR | 81.0 | 82.0 | 75.7 | 76.9 | 81.9 | 81.2 |
| OCRBench | 89.4 | -- | 86.1 | -- | 90.0 | 89.4 |
| Spatial Intelligence | ||||||
| ERQA | 60.5 | 67.5 | 57.5 | 46.8 | 61.8 | 62.5 |
| CountBench | 97.8 | 97.2 | 96.1 | 90.6 | 96.1 | 97.8 |
| RefCOCO avg | 90.9 | 92.3 | -- | -- | 92.0 | 92.5 |
| EmbSpatialBench | 84.5 | -- | -- | -- | 84.3 | 84.6 |
| RefSpatialBench | 67.7 | -- | 4.7 | -- | 64.3 | 70.0 |
| Video Understanding | ||||||
| VideoMME(w sub.) | 87.0 | 87.5 | -- | 77.7 | 86.6 | 87.7 |
| VideoMMMU | 82.3 | 84.7 | 81.6 | 84.4 | 83.7 | 84.4 |
| MLVU | 85.9 | 86.7 | -- | 81.7 | 86.2 | 86.6 |
| MVBench | 74.6 | 77.6 | -- | 67.2 | 74.6 | 75.5 |
| Visual Agent | ||||||
| V* | 93.7 | 95.8 | -- | 67.0 | 90.1 | 94.7 |
| AndroidWorld | 64.2 | -- | -- | -- | -- | 70.3 |
* Empty cells (--) indicate scores not yet available or not applicable.
Quickstart
For streamlined integration, we recommend using Qwen3.6 via APIs. Below is a guide to using Qwen3.6 via an OpenAI-compatible API.
Serving Qwen3.6
Qwen3.6 can be served via APIs with popular inference frameworks. In the following, we show example commands to launch OpenAI-Compatible API servers for Qwen3.6 models.
[!Important] Inference efficiency and throughput vary significantly across frameworks. We recommend using the latest framework versions to ensure optimal performance and compatibility. For production workloads or high-throughput scenarios, dedicated serving engines such as SGLang, KTransformers or vLLM are strongly recommended.
[!Important] The model has a default context length of 262,144 tokens. If you encounter out-of-memory (OOM) errors, consider reducing the context window. However, because Qwen3.6 leverages extended context for complex tasks, we advise maintaining a context length of at least 128K tokens to preserve thinking capabilities.
SGLang
SGLang is a fast serving framework for large language models and vision language models.
sglang>=0.5.10 is recommended for Qwen3.6, which can be installed using the following command in a fresh environment:
uv pip install sglang[all]
See its documentation for more details.
The following will create API endpoints at http://localhost:8000/v1:
- Standard Version: The following command can be used to create an API endpoint with a maximum context length of 262,144 tokens using tensor parallelism on 8 GPUs.

  ```shell
  python -m sglang.launch_server --model-path Qwen/Qwen3.6-27B --port 8000 --tp-size 8 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3
  ```

- Tool Use: To support tool use, you can use the following command.

  ```shell
  python -m sglang.launch_server --model-path Qwen/Qwen3.6-27B --port 8000 --tp-size 8 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3 --tool-call-parser qwen3_coder
  ```

- Multi-Token Prediction (MTP): The following command is recommended for MTP:

  ```shell
  python -m sglang.launch_server --model-path Qwen/Qwen3.6-27B --port 8000 --tp-size 8 --mem-fraction-static 0.8 --context-length 262144 --reasoning-parser qwen3 --speculative-algo NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4
  ```
For detailed deployment guide, see the SGLang Qwen3.5 Cookbook.
vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs.
vllm>=0.19.0 is recommended for Qwen3.6, which can be installed using the following command in a fresh environment:
uv pip install vllm --torch-backend=auto
See its documentation for more details.
The following will create API endpoints at http://localhost:8000/v1:
- Standard Version: The following command can be used to create an API endpoint with a maximum context length of 262,144 tokens using tensor parallelism on 8 GPUs.

  ```shell
  vllm serve Qwen/Qwen3.6-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3
  ```

- Tool Call: To support tool use, you can use the following command.

  ```shell
  vllm serve Qwen/Qwen3.6-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser qwen3_coder
  ```

- Multi-Token Prediction (MTP): The following command is recommended for MTP:

  ```shell
  vllm serve Qwen/Qwen3.6-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 --speculative-config '{"method":"qwen3_next_mtp","num_speculative_tokens":2}'
  ```

- Text-Only: The following command skips the vision encoder and multimodal profiling to free up memory for additional KV cache:

  ```shell
  vllm serve Qwen/Qwen3.6-27B --port 8000 --tensor-parallel-size 8 --max-model-len 262144 --reasoning-parser qwen3 --language-model-only
  ```
For detailed deployment guide, see the vLLM Qwen3.5 Recipe.
KTransformers
KTransformers is a flexible framework for experiencing cutting-edge LLM inference optimizations with CPU-GPU heterogeneous computing. For running Qwen3.6 with KTransformers, see the KTransformers Deployment Guide.
Hugging Face Transformers
Hugging Face Transformers contains a lightweight server which can be used for quick testing and moderate load deployment.
The latest transformers is required for Qwen3.6:
pip install "transformers[serving]"
See its documentation for more details. Please also make sure torchvision and pillow are installed.
Then, run transformers serve to launch a server with API endpoints at http://localhost:8000/v1; it will place the model on accelerators if available:
transformers serve Qwen/Qwen3.6-27B --port 8000 --continuous-batching
Using Qwen3.6 via the Chat Completions API
The chat completions API is accessible via standard HTTP requests or OpenAI SDKs. Here, we show examples using the OpenAI Python SDK.
Before starting, make sure it is installed and that the API key and API base URL are configured, e.g.:
pip install -U openai
# Set the following accordingly
export OPENAI_BASE_URL="http://localhost:8000/v1"
export OPENAI_API_KEY="EMPTY"
[!Tip] We recommend using the following sets of sampling parameters for generation:
- Thinking mode for general tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
- Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
- Instruct (or non-thinking) mode: `temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`

Please note that support for sampling parameters varies across inference frameworks.
[!Important] Qwen3.6 models operate in thinking mode by default, generating thinking content signified by `<think>\n...</think>\n\n` before producing the final response. To disable thinking content and obtain a direct response, refer to the examples here.
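If your client code needs the reasoning and the final answer separately, the thinking block can be split off with a simple string parse. A minimal sketch, assuming the content arrives as a single string with the `<think>...</think>` prefix described above (some frameworks instead return a separate reasoning field):

```python
def split_thinking(text: str) -> tuple[str, str]:
    """Return (thinking, answer) from a '<think>...</think>answer' string."""
    if text.startswith("<think>") and "</think>" in text:
        thinking, _, answer = text.partition("</think>")
        return thinking[len("<think>"):].strip(), answer.strip()
    return "", text.strip()  # no thinking block present

thinking, answer = split_thinking("<think>\nLet me check.\n</think>\n\nDone.")
print(answer)  # "Done."
```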
Text-Only Input
from openai import OpenAI
# Configured by environment variables
client = OpenAI()
messages = [
{"role": "user", "content": "Type \"I love Qwen3.6\" backwards"},
]
chat_response = client.chat.completions.create(
model="Qwen/Qwen3.6-27B",
messages=messages,
max_tokens=81920,
temperature=1.0,
top_p=0.95,
presence_penalty=0.0,
extra_body={
"top_k": 20,
},
)
print("Chat response:", chat_response)
Image Input
from openai import OpenAI
# Configured by environment variables
client = OpenAI()
messages = [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/CI_Demo/mathv-1327.jpg"
}
},
{
"type": "text",
"text": "The centres of the four illustrated circles are in the corners of the square. The two big circles touch each other and also the two little circles. With which factor do you have to multiply the radii of the little circles to obtain the radius of the big circles?\nChoices:\n(A) $\\frac{2}{9}$\n(B) $\\sqrt{5}$\n(C) $0.8 \\cdot \\pi$\n(D) 2.5\n(E) $1+\\sqrt{2}$"
}
]
}
]
chat_response = client.chat.completions.create(
model="Qwen/Qwen3.6-27B",
messages=messages,
max_tokens=81920,
temperature=1.0,
top_p=0.95,
presence_penalty=0.0,
extra_body={
"top_k": 20,
},
)
print("Chat response:", chat_response)
Video Input
from openai import OpenAI
# Configured by environment variables
client = OpenAI()
messages = [
{
"role": "user",
"content": [
{
"type": "video_url",
"video_url": {
"url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.5/demo/video/N1cdUjctpG8.mp4"
}
},
{
"type": "text",
"text": "How many porcelain jars were discovered in the niches located in the primary chamber of the tomb?"
}
]
}
]
# When vLLM is launched with `--media-io-kwargs '{"video": {"num_frames": -1}}'`,
# video frame sampling can be configured via `extra_body` (e.g., by setting `fps`).
# This feature is currently supported only in vLLM.
#
# By default, `fps=2` and `do_sample_frames=True`.
# With `do_sample_frames=True`, you can customize the `fps` value to set your desired video sampling rate.
chat_response = client.chat.completions.create(
model="Qwen/Qwen3.6-27B",
messages=messages,
max_tokens=81920,
temperature=1.0,
top_p=0.95,
presence_penalty=0.0,
extra_body={
"top_k": 20,
"mm_processor_kwargs": {"fps": 2, "do_sample_frames": True},
},
)
print("Chat response:", chat_response)
Instruct (or Non-Thinking) Mode
[!Important] Qwen3.6 does not officially support the soft switch of Qwen3, i.e., `/think` and `/no_think`.

Qwen3.6 will think by default before responding. You can obtain a direct response from the model, without thinking, by configuring the API parameters. For example:
from openai import OpenAI
# Configured by environment variables
client = OpenAI()
messages = [
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://qianwen-res.oss-accelerate.aliyuncs.com/Qwen3.6/demo/RealWorld/RealWorld-04.png"
}
},
{
"type": "text",
"text": "Where is this?"
}
]
}
]
chat_response = client.chat.completions.create(
model="Qwen/Qwen3.6-27B",
messages=messages,
max_tokens=32768,
temperature=0.7,
top_p=0.8,
presence_penalty=1.5,
extra_body={
"top_k": 20,
"chat_template_kwargs": {"enable_thinking": False},
},
)
print("Chat response:", chat_response)
[!Note] If you are using APIs from Alibaba Cloud Model Studio, in addition to changing `model`, please use `"enable_thinking": False` instead of `"chat_template_kwargs": {"enable_thinking": False}`.
Preserve Thinking
By default, only the thinking blocks generated while handling the latest user message are retained, resulting in a pattern commonly known as interleaved thinking.
Qwen3.6 has been additionally trained to preserve and leverage thinking traces from historical messages.
You can enable this behavior by setting the preserve_thinking option:
from openai import OpenAI
# Configured by environment variables
client = OpenAI()
messages = [...]
chat_response = client.chat.completions.create(
model="Qwen/Qwen3.6-27B",
messages=messages,
max_tokens=32768,
temperature=0.6,
top_p=0.95,
presence_penalty=0.0,
extra_body={
"top_k": 20,
"chat_template_kwargs": {"preserve_thinking": True},
},
)
print("Chat response:", chat_response)
[!Note] If you are using APIs from Alibaba Cloud Model Studio, in addition to changing `model`, please use `"preserve_thinking": True` instead of `"chat_template_kwargs": {"preserve_thinking": True}`.
This capability is particularly beneficial for agent scenarios, where maintaining full reasoning context can enhance decision consistency and, in many cases, reduce overall token consumption by minimizing redundant reasoning. Additionally, it can improve KV cache utilization, optimizing inference efficiency in both thinking and non-thinking modes.
Agentic Usage
Qwen3.6 excels in tool calling capabilities.
Qwen-Agent
We recommend using Qwen-Agent to quickly build Agent applications with Qwen3.6.
To define the available tools, you can use an MCP configuration file, use the integrated tools of Qwen-Agent, or integrate other tools yourself.
import os
from qwen_agent.agents import Assistant
# Define LLM
# Using Alibaba Cloud Model Studio
llm_cfg = {
# Use the OpenAI-compatible model service provided by DashScope:
'model': 'qwen3.6-27b',
'model_type': 'qwenvl_oai',
'model_server': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
'api_key': os.getenv('DASHSCOPE_API_KEY'),
'generate_cfg': {
'use_raw_api': True,
# When using the DashScope OAI API, pass the parameter of whether to enable thinking mode in this way
'extra_body': {
'enable_thinking': True,
'preserve_thinking': True,
},
},
}
# Using an OpenAI-compatible API endpoint. It is recommended to disable the
# reasoning and tool-call parsing functionality of the deployment frameworks
# and let Qwen-Agent automate the related operations.
#
# llm_cfg = {
# # Use your own model service compatible with OpenAI API by vLLM/SGLang:
# 'model': 'Qwen/Qwen3.6-27B',
# 'model_type': 'qwenvl_oai',
# 'model_server': 'http://localhost:8000/v1', # api_base
# 'api_key': 'EMPTY',
#
# 'generate_cfg': {
# 'use_raw_api': True,
# # When using vLLM/SGLang OAI API, pass the parameter of whether to enable thinking mode in this way
# 'extra_body': {
# 'chat_template_kwargs': {'enable_thinking': True, 'preserve_thinking': True}
# },
# },
# }
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
"filesystem": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/xxxx/Desktop"]
}
}
}
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'Help me organize my desktop.'}]
for responses in bot.run(messages=messages):
pass
print(responses)
# Streaming generation
messages = [{'role': 'user', 'content': 'Develop a dog website and save it on the desktop'}]
for responses in bot.run(messages=messages):
pass
print(responses)
Qwen Code
Qwen Code is an open-source AI agent for the terminal, optimized for Qwen models. It helps you understand large codebases, automate tedious work, and ship faster.
For more information, please refer to Qwen Code.
Processing Ultra-Long Texts
Qwen3.6 natively supports context lengths of up to 262,144 tokens. For long-horizon tasks where the total length (including both input and output) exceeds this limit, we recommend using RoPE scaling techniques, e.g., YaRN, to handle long texts effectively.
YaRN is currently supported by several inference frameworks, e.g., transformers, vllm, ktransformers and sglang.
In general, there are two approaches to enabling YaRN for supported frameworks:
1. Modifying the model configuration file: In the `config.json` file, change the `rope_parameters` fields in `text_config` to:

   ```json
   {
     "mrope_interleaved": true,
     "mrope_section": [11, 11, 10],
     "rope_type": "yarn",
     "rope_theta": 10000000,
     "partial_rotary_factor": 0.25,
     "factor": 4.0,
     "original_max_position_embeddings": 262144
   }
   ```

2. Passing command line arguments:

   For `vllm`, you can use:

   ```shell
   VLLM_ALLOW_LONG_MAX_MODEL_LEN=1 vllm serve ... --hf-overrides '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}' --max-model-len 1010000
   ```

   For `sglang` and `ktransformers`, you can use:

   ```shell
   SGLANG_ALLOW_OVERWRITE_LONGER_CONTEXT_LEN=1 python -m sglang.launch_server ... --json-model-override-args '{"text_config": {"rope_parameters": {"mrope_interleaved": true, "mrope_section": [11, 11, 10], "rope_type": "yarn", "rope_theta": 10000000, "partial_rotary_factor": 0.25, "factor": 4.0, "original_max_position_embeddings": 262144}}}' --context-length 1010000
   ```
[!NOTE] All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise modifying the `rope_parameters` configuration only when processing long contexts is required. It is also recommended to adjust the `factor` as needed. For example, if the typical context length for your application is 524,288 tokens, it would be better to set `factor` to 2.0.
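In other words, `factor` is roughly the target context divided by the native 262,144-token window, rounded up; a quick check:

```python
import math

NATIVE_CTX = 262_144  # Qwen3.6's native context length
for target in (524_288, 1_010_000):
    factor = target / NATIVE_CTX
    print(f"target={target:>9,} -> factor={factor:.2f} "
          f"(use {math.ceil(factor):.1f})")
# target=  524,288 -> factor=2.00 (use 2.0)
# target=1,010,000 -> factor=3.85 (use 4.0)
```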
Best Practices
To achieve optimal performance, we recommend the following settings:
- Sampling Parameters:
  - We suggest using the following sets of sampling parameters depending on the mode and task type:
    - Thinking mode for general tasks: `temperature=1.0`, `top_p=0.95`, `top_k=20`, `min_p=0.0`, `presence_penalty=0.0`, `repetition_penalty=1.0`
    - Thinking mode for precise coding tasks (e.g., WebDev): `temperature=0.6`, `top_p=0.95`, `top_k=20`, `min_p=0.0`, `presence_penalty=0.0`, `repetition_penalty=1.0`
    - Instruct (or non-thinking) mode: `temperature=0.7`, `top_p=0.80`, `top_k=20`, `min_p=0.0`, `presence_penalty=1.5`, `repetition_penalty=1.0`
  - For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
- Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 81,920 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
- Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking (see the sketch after this list).
  - Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
  - Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
- Long Video Understanding: To optimize inference efficiency for plain text and images, the `size` parameter in the released `video_preprocessor_config.json` is conservatively configured. It is recommended to set the `longest_edge` parameter in the video_preprocessor_config file to 469,762,048 (corresponding to 224K video tokens) to enable higher frame-rate sampling for hour-scale videos and thereby achieve superior performance. For example, `{"longest_edge": 469762048, "shortest_edge": 4096}`. Alternatively, override the default values via engine startup parameters. For implementation details, refer to: vLLM / SGLang.
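As a quick illustration of the output-standardization advice above, benchmark prompts can be assembled like this (a sketch; the helper names and question strings are placeholders):

```python
# Suffixes from the Best Practices section above.
MATH_SUFFIX = ("Please reason step by step, and put your final answer "
               "within \\boxed{}.")
MCQ_SUFFIX = ('Please show your choice in the answer field with only '
              'the choice letter, e.g., "answer": "C".')

def math_prompt(question: str) -> list[dict]:
    """Build a chat message list for a math benchmark question."""
    return [{"role": "user", "content": f"{question}\n{MATH_SUFFIX}"}]

def mcq_prompt(question: str) -> list[dict]:
    """Build a chat message list for a multiple-choice question."""
    return [{"role": "user", "content": f"{question}\n{MCQ_SUFFIX}"}]

print(math_prompt("Compute 2^10.")[0]["content"])
```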
Citation
If you find our work helpful, feel free to cite it.
@misc{qwen3.6-27b,
title = {{Qwen3.6-27B}: Flagship-Level Coding in a {27B} Dense Model},
author = {{Qwen Team}},
month = {April},
year = {2026},
url = {https://qwen.ai/blog?id=qwen3.6-27b}
}