Qwen3.6-27B — RYS Layer Surgery (GGUF)

A modified version of Qwen3.6-27B-Instruct produced by RYS layer duplication — no training, no weight changes, just running layers 33–36 a second time during the forward pass.

Based on David Ng's RYS method.


TL;DR

On the Berkeley Function-Call Leaderboard (BFCL v4, 100 tests/category × 13 single-turn categories, sampled), this variant beats the unmodified base model by +1.96 pp on average when run with thinking mode enabled — driven by large gains on the hardest live categories:

| Category | Base | rys_33-36 | Δ (pp) |
|---|---|---|---|
| live_parallel | 68.75% | 87.50% | +18.75 |
| live_relevance | 68.75% | 81.25% | +12.50 |
| live_parallel_multiple | 70.83% | 75.00% | +4.17 |
| mean (13 categories) | 82.56% | 84.52% | +1.96 |

The wins come from improved reasoning during prefill on multi-call / relevance-judgement queries. The trade is small regressions (−1 to −3 pp) on easier non-live categories. Thinking mode is required — without it, this variant slightly underperforms base.


Files

| File | Layers | Size |
|---|---|---|
| Qwen3.6-27B-rys_33-36-UD-Q4_K_XL.gguf | 68 | 18 GiB |

The base GGUF (no surgery) is at unsloth/Qwen3.6-27B-GGUF.


Internal probe results

A small probe of math, EQ, and reasoning prompts was run during the layer search. The probe categories are tiny (3 questions per reasoning subcategory, ~16 EQ-Bench-style items, ~16 math problems) so individual numbers should be treated as directional, not definitive.

| Metric | Base | rys_33-36 |
|---|---|---|
| Math (GSM8K-style partial credit) | 0.537 | 0.500 |
| EQ (EQ-Bench-style, 0–100) | 93.59 | 86.64 |
| Reasoning total (17 probes, 5 subcategories) | 0.765 | 0.882 |
| ↳ causal | 0.67 | 1.00 |
| ↳ date | 1.00 | 1.00 |
| ↳ logic | 1.00 | 1.00 |
| ↳ navigation | 0.67 | 1.00 |
| ↳ gsm | 0.60 | 0.60 |

The 33–36 block was the only configuration in the layer-block sweep that achieved a perfect score on the causal-reasoning subcategory while keeping the other reasoning subcategories at or above baseline. This is what motivated picking it for the BFCL run below.


BFCL results (sampled, thinking enabled)

| Category | Base (%) | rys_33-36 (%) |
|---|---|---|
| irrelevance | 90.00 | 88.00 |
| multiple | 96.00 | 95.00 |
| parallel | 93.00 | 91.00 |
| parallel_multiple | 87.00 | 85.00 |
| simple_java | 59.00 | 61.00 |
| simple_javascript | 74.00 | 72.00 |
| simple_python | 95.00 | 92.00 |
| live_irrelevance | 98.00 | 99.00 |
| live_multiple | 88.00 | 87.00 |
| live_parallel | 68.75 | 87.50 |
| live_parallel_multiple | 70.83 | 75.00 |
| live_relevance | 68.75 | 81.25 |
| live_simple | 85.00 | 85.00 |
| mean | 82.56 | 84.52 |

Sample size: 100 tests/category for categories with ≥100 entries; the full category was used for the smaller ones (live_parallel, live_parallel_multiple, live_relevance, simple_javascript). 1006 tests per model in total. The full benchmark would be ~5x larger and would also cover multi-turn, memory, and web-search categories that we did not run.

Inference: llama.cpp llama-server with --jinja; BFCL driven via /v1/chat/completions with native tool use; temperature=1.0, top_p=0.95, top_k=20, max_tokens=8192.


What is RYS?

Transformers self-organise during training into functional circuits — contiguous blocks of layers that act together. RYS duplicates a specific block in the forward pass using the same weights:

Normal:    0 → … → 32 → 33 → 34 → 35 → 36 → 37 → … → 63
rys_33-36: 0 → … → 32 → 33 → 34 → 35 → 36
                       → 33 → 34 → 35 → 36 → 37 → … → 63

The model processes layers 33–36 twice. No fine-tuning, no extra parameters beyond the GGUF file overhead. Total layer count goes from 64 → 68.
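
To make the reuse pattern concrete, here is a toy PyTorch sketch of the same idea. It is not the GGUF-level surgery used to produce this file (that rewrites the file's layer layout), and it ignores per-layer cache bookkeeping; the decoder here is a stand-in:

```python
import torch
import torch.nn as nn

class ToyDecoder(nn.Module):
    """Stand-in for a 64-layer decoder stack, to illustrate RYS duplication."""
    def __init__(self, n_layers=64, d=32):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(n_layers))

    def forward(self, x, rys_block=None):
        order = list(range(len(self.layers)))
        if rys_block is not None:                      # e.g. (33, 36)
            s, e = rys_block
            # replay layers s..e immediately after their first pass
            order = order[: e + 1] + list(range(s, e + 1)) + order[e + 1 :]
        for i in order:                                # 68 steps for (33, 36)
            x = self.layers[i](x)                      # same weights, reused
        return x

model = ToyDecoder()
out = model(torch.randn(1, 32), rys_block=(33, 36))   # no new parameters, just reuse
```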


How the layer range was found

A two-pass sweep across all 64 layers using a small probe of math, EQ, and reasoning prompts:

  • Pass 1 (8-layer blocks, stride 4): identified hot zones around layers 32–48 (math gains, causal reasoning) and 48–60 (general reasoning gains).
  • Pass 2 (4-layer blocks, stride 1, layers 32–58): the 33–36 block was the only configuration that achieved a perfect score on the probe's causal-reasoning subcategory while keeping date, logic, and navigation at their baseline ceilings.

The probe alone suggested rys_33-36 was a moderate win. The sampled BFCL run with thinking enabled confirms it on the harder live categories (above).
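
For reference, the two-pass sweep reduces to a loop like the following. build_rys_variant() and run_probe() are hypothetical placeholders standing in for the actual GGUF-surgery and probe-scoring tooling:

```python
def build_rys_variant(start, end):
    # placeholder: in practice, write a GGUF with layers start..end duplicated
    return (start, end)

def run_probe(variant):
    # placeholder: in practice, score the variant on the math/EQ/reasoning probe
    return {"math": 0.0, "eq": 0.0, "reasoning": 0.0}

def sweep(block_size, stride, lo, hi):
    """Score every block of `block_size` layers whose start lies in [lo, hi]."""
    results = {}
    for start in range(lo, hi + 1, stride):
        end = start + block_size - 1                 # e.g. start=33 -> block 33-36
        variant = build_rys_variant(start, end)
        results[(start, end)] = run_probe(variant)
    return results

pass1 = sweep(block_size=8, stride=4, lo=0,  hi=56)  # coarse pass: find hot zones
pass2 = sweep(block_size=4, stride=1, lo=32, hi=55)  # fine pass over the 32-58 region
```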

Extended evaluation (Ng's protocol)

After a thoughtful question on the discussion forum about deviations from David Ng's suggested reproduction path, we went back and ran the steps we had skipped:

Extended probe: math_120 + eq_140 from Ng's repo, run with --reasoning off to match the protocol's intent (the math probe is designed for intuitive guessing, not deliberate computation):

| Variant | math_120 | eq_140 |
|---|---|---|
| base | 0.9986 | 74.53 |
| rys_33-36 | 0.9930 | 78.81 |

On the larger probe, rys_33-36 shows a clear EQ improvement (+4.28 points), and math is at ceiling for both variants. Note that this is the opposite direction from our small internal probe (where rys_33-36 scored lower on EQ): small-probe variance was misleading us, and the 140-question sample is the more trustworthy reading.

Depth-2 beam search — 10 non-overlapping pair-combinations of the top single-block configs, each scored on the same probe:

| Variant | math_120 | eq_140 |
|---|---|---|
| rys_33-36 | 0.9930 | 78.81 |
| rys_33-36 + 49-52 | 0.9226 | 75.66 |
| rys_33-36 + 53-56 | 0.9219 | 75.27 |
| rys_33-36 + 54-57 | 0.9639 | 72.21 |
| rys_33-36 + 56-59 | 0.9643 | 74.21 |
| rys_33-36 + 58-61 | 0.9930 | 68.78 |
| rys_49-52 + 53-56 | 0.8864 | 66.70 |
| rys_49-52 + 56-59 | 0.9654 | 69.67 |
| rys_49-52 + 58-61 | 0.9606 | 69.18 |
| rys_53-56 + 58-61 | 0.9635 | 63.57 |
| rys_54-57 + 58-61 | 0.9703 | 59.93 |

No depth-2 combination beats rys_33-36 on EQ_140. Stacking blocks degrades math (sometimes catastrophically) without improving EQ. So the shortcut we took in candidate selection (no beam search) did not cost us a better configuration in this neighborhood. We did not train Ng's surrogate regressor or run a deeper beam search — those would explore more of the configuration space and might find something better.
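
A sketch of the pair enumeration, for reference. The candidate block list below is read off the table rows and may not be the exact pool used; only pairs that share no layer are kept:

```python
from itertools import combinations

# Candidate single blocks, inferred from the depth-2 table above (hypothetical pool).
top_blocks = [(33, 36), (49, 52), (53, 56), (54, 57), (56, 59), (58, 61)]

def overlaps(a, b):
    """True if two (start, end) layer blocks share at least one layer."""
    return a[0] <= b[1] and b[0] <= a[1]

pairs = [(a, b) for a, b in combinations(top_blocks, 2) if not overlaps(a, b)]
for a, b in pairs:
    print(f"rys_{a[0]}-{a[1]} + rys_{b[0]}-{b[1]}")   # each pair gets built and probed
```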


Hybrid Mamba/attention architecture constraint

Qwen3.6-27B is a hybrid SSM/attention model (full_attention_interval = 4): full attention every 4th layer, Gated DeltaNet SSM everywhere else. This creates a hard constraint: the total layer count must remain divisible by 4.

  • Block size 4 → 64 + 4 = 68 layers (68 ÷ 4 = 17 ✓)
  • Block size 3 → 64 + 3 = 67 layers (67 ÷ 4 = 16.75 ✗ → crash)
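
The constraint is easy to check up front when picking block sizes; a minimal sketch:

```python
FULL_ATTENTION_INTERVAL = 4   # from the Qwen3.6-27B config
BASE_LAYERS = 64

def rys_block_is_loadable(block_size):
    """The duplicated total must stay a multiple of the full-attention interval."""
    return (BASE_LAYERS + block_size) % FULL_ATTENTION_INTERVAL == 0

print(rys_block_is_loadable(4))   # True  -> 68 layers
print(rys_block_is_loadable(3))   # False -> 67 layers, hybrid pattern breaks
```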

Usage

llama.cpp / llama-server

The wins require thinking mode. Use --jinja so the server applies the Qwen3.6 chat template, which primes thinking properly:

llama-server -m Qwen3.6-27B-rys_33-36-UD-Q4_K_XL.gguf \
             --jinja \
             -ngl 99 -c 32768 \
             --port 8080

Sampling parameters (Qwen3.6 thinking-mode defaults)

temperature = 1.0
top_p       = 0.95
top_k       = 20
min_p       = 0.0

For more deterministic / coding-focused tasks, Qwen recommends temperature=0.6 instead. Either way, leave thinking enabled.
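
For an end-to-end check of the function-calling path (the same /v1/chat/completions route used for the BFCL run), here is a minimal client sketch with the thinking-mode sampling defaults. The tool definition is a made-up example, and top_k / min_p go through extra_body because they are llama-server extensions rather than standard OpenAI parameters:

```python
from openai import OpenAI

# Points at the llama-server started above; the key is a dummy, llama.cpp ignores it.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key")

# Illustrative tool schema (not from BFCL).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="Qwen3.6-27B-rys_33-36-UD-Q4_K_XL",   # llama-server serves whatever model is loaded
    messages=[{"role": "user", "content": "What's the weather in Tokyo and in Osaka?"}],
    tools=tools,
    temperature=1.0,
    top_p=0.95,
    max_tokens=8192,                            # leave room for long thinking chains
    extra_body={"top_k": 20, "min_p": 0.0},     # llama-server sampling extensions
)
print(resp.choices[0].message.tool_calls)
```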

Token budget

Qwen3.6's thinking chains can be long (we observed up to ~7k tokens of reasoning on hard BFCL parallel cases). Set max_tokens ≥ 8192 to avoid truncating mid-thought.

VRAM

About 22 GiB at Q4_K_XL with 32k context and Q8 KV cache. Fits comfortably on a single A100 40 GB.


When to use this

  • You want better function-calling performance on complex live queries (parallel calls, relevance judgement) and you can afford 4 extra layers (~6%) of prefill compute.
  • You're running with thinking mode on (this is where the gain comes from).

When NOT to use this

  • You're running without thinking — base will be ~1.5 pp better.
  • You care about the very-easy categories (simple_python, multiple) more than the hard live ones — base is 1–3 pp better there.

Credits

RYS layer-duplication method by David Ng.

License

Apache 2.0 (inherited from base model).
