Darwin-36B-Opus: Darwin V7 Evolutionary Merge on Qwen3.6-35B-A3B — 88.4% on GPQA Diamond


Qwen3.6-35B-A3B MoE | 36B total / 3B active | Thinking Mode | 262K Context | Multilingual | BF16 | Apache 2.0

Darwin V7 evolutionary merge: Father × Opus-distilled Mother → 88.4% on GPQA Diamond


Abstract

Darwin-36B-Opus is a 36-billion-parameter mixture-of-experts (MoE) language model produced by the Darwin V7 evolutionary breeding engine from two publicly available parents:

  • Father — Qwen/Qwen3.6-35B-A3B
  • Mother — hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled

Darwin V7 recombines these two parents into a single descendant that preserves the Mother's distilled chain-of-thought behavior while retaining the structural fidelity of the Father's expert topology. The breeding process is fully automated and produces a deployable bfloat16 checkpoint in under an hour on a single GPU.

On the GPQA Diamond benchmark — 198 graduate-level questions in physics, chemistry, and biology — Darwin-36B-Opus achieves 88.4%, establishing it as the highest-performing model in the Darwin family and extending the series' record of producing state-of-the-art open models through evolution rather than retraining.


GPQA Diamond Leaderboard (April 23, 2026)

| Rank | Model | Parameters | GPQA Diamond |
|------|-------|------------|--------------|
| 1 | TNSA/NGen-4-Pro | — | 91.1% |
| 2 | TNSA/NGen-4 | — | 90.1% |
| 3 | Qwen/Qwen3.5-397B-A17B | 397B | 88.4% |
| 3 | FINAL-Bench/Darwin-36B-Opus | 36B (A3B) | 88.4% |
| 5 | moonshotai/Kimi-K2.5 | — | 87.6% |
| 6 | FINAL-Bench/Darwin-27B-Opus | 27B | 86.9% |
| 7 | Qwen/Qwen3.5-122B-A10B | 122B | 86.6% |
| 8 | zai-org/GLM-5.1 | 744B | 86.2% |
| 9 | zai-org/GLM-5 | 744B | 86.0% |
| 10 | zai-org/GLM-4.7 | — | 85.7% |

A 36B-parameter MoE model with only 3B active parameters ties the 397B Qwen3.5-397B-A17B and surpasses flagship dense and sparse systems an order of magnitude larger.


What Is Darwin?

Darwin is the evolutionary model breeding engine developed by FINAL-Bench / VIDRAFT_LAB. Rather than allocating further compute to gradient optimization, Darwin treats trained checkpoints as a genetic pool and discovers high-performing descendants through principled recombination of their weight tensors.

Each Darwin generation (v1 through v7+) refines the breeding procedure. Darwin V7 is the current generation and the one used to produce this model. Specific algorithmic details of V7 are proprietary to FINAL-Bench; at a high level, the engine performs:

  1. Per-tensor compatibility analysis of the two parents to identify which components transfer cleanly and which require weighted recombination.
  2. Automated recombination guided by that analysis, producing a single coherent descendant.
  3. Verification via a multi-phase scientific benchmark before release.
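Step 1 can be pictured as a per-tensor similarity classification. The sketch below is purely illustrative; the actual V7 analysis is proprietary, and the function names, metric, and threshold here are assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two flattened weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def classify_tensor(father_w, mother_w, threshold=0.98):
    """Tensors that barely diverged between the parents transfer cleanly;
    the rest are flagged for weighted recombination."""
    return "clean_transfer" if cosine(father_w, mother_w) >= threshold else "recombine"
```

In practice the LoRA-touched tensors (attention, embeddings) would diverge most from the Father and land in the recombination bucket.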

All Darwin models are released under Apache 2.0 and inherit fully from the parents' open-source licenses.


Parent Models

🔵 Father — Qwen/Qwen3.6-35B-A3B

  • Model type: Qwen3.6 MoE, 35B total / ~3B active parameters
  • Layers: 40, Hidden size: 2048
  • Attention: hybrid 75% Gated DeltaNet + 25% Gated Attention (alternating)
  • Experts: 256 routed (top-8) + 1 shared per layer
  • Native scores: MMLU-Pro 85.2%, GPQA 86.0%, AIME26 92.7%
  • Role: Structural backbone and MoE topology donor.

🔴 Mother — hesamation/Qwen3.6-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled

  • Method: LoRA SFT on the Father over 14,233 Claude Opus 4.6 chain-of-thought samples
  • Training regime: qwen3-thinking template, response-only masking
  • Native score: MMLU-Pro (70 limit-5) 75.71%, +32.85 percentage points over the un-distilled Father baseline
  • Role: Reasoning signal donor — the source whose `<think>` trajectories Darwin preserves.

Evolution Process (High Level)

Darwin V7 produces the descendant through a deterministic recombination that does not require gradient optimization on the final assembly. The engine analyzes each tensor in both parents, classifies it by architectural role, and assigns a recombination weight appropriate to that role — biasing toward the Mother for components that carry reasoning behavior (attention, shared experts, embeddings) while preserving the Father's structural contributions where they dominate.

Total breeding time on a single B200 GPU: under 10 minutes.
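At that level of description, the merge reduces to a role-weighted linear recombination per tensor. A minimal sketch, assuming simple interpolation and made-up role weights (Darwin V7's real rules and coefficients are proprietary, and the key names are hypothetical):

```python
def recombine(father, mother, reasoning_keys=("attn", "shared_expert", "embed")):
    """Toy role-weighted linear merge of two state dicts (lists stand in for tensors).
    The 0.7/0.3 split is illustrative only."""
    child = {}
    for name, w_f in father.items():
        w_m = mother[name]
        # Bias toward the Mother for reasoning-carrying components,
        # toward the Father for structural ones (e.g. routed experts)
        alpha = 0.7 if any(k in name for k in reasoning_keys) else 0.3
        child[name] = [alpha * m + (1 - alpha) * f for f, m in zip(w_f, w_m)]
    return child
```

A real implementation would stream safetensors shards and apply the same rule with `torch` tensor ops rather than Python lists.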


GPQA Diamond Evaluation

Methodology

We employed a two-pass adaptive evaluation protocol (identical across all Darwin Opus models to preserve cross-model comparability):

Pass 1 — Greedy Baseline

  • All 198 GPQA Diamond questions, deterministic decoding (do_sample=False)
  • Maximum 5,120 new tokens per question (allows full `<think>` trajectories)
  • Standard multiple-choice prompt format

Pass 2 — Stochastic Retry with Tiebreaker

  • Questions incorrectly answered in Pass 1 are re-evaluated with majority-of-8 stochastic generations (temperature=0.7, max_tokens=5120)
  • Where the vote margin is inconclusive (3:3, 3:4, or 4:4), an additional 16-vote combined tiebreaker round (temperature=0.5) resolves the answer

Evaluation was performed in parallel across 8 × NVIDIA B200 GPUs, each running an independent full copy of the model on a disjoint subset of the benchmark (round-robin question assignment).
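The Pass-2 logic can be sketched as follows; `sample_fn` is a hypothetical callable (one stochastic generation returning an extracted answer letter), and the margin rule is inferred from the inconclusive cases listed above:

```python
from collections import Counter

def pass2_answer(sample_fn, n_votes=8, tiebreak_votes=16):
    """Sketch of the Pass-2 retry: majority-of-8 at temperature 0.7, plus a
    16-vote combined tiebreaker at temperature 0.5 when the margin is inconclusive."""
    votes = Counter(sample_fn(0.7) for _ in range(n_votes))
    top = votes.most_common(2)
    # Margins like 4:4, 4:3, or 3:3 are inconclusive and trigger the tiebreaker
    if len(top) > 1 and top[0][1] - top[1][1] <= 1:
        votes.update(sample_fn(0.5) for _ in range(tiebreak_votes))
    return votes.most_common(1)[0][0]
```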

Aggregate Results

| Phase | Cumulative Correct | Accuracy | Δ |
|-------|--------------------|----------|---|
| Pass 1 — Greedy Baseline | 145/198 | 73.2% | baseline |
| Pass 2 — Stochastic Retry | 175/198 | 88.4% | +15.2 percentage points |

The Pass-2 gain of +30 questions (+15.2 pp) demonstrates that the Mother's inherited `<think>` reasoning yields substantially more correct answers under stochastic decoding than under greedy, confirming that the evolutionary merge preserved reasoning depth.

Results by Shard

| GPU | Questions | Pass 1 Greedy | Final |
|-----|-----------|---------------|-------|
| GPU0 | 25 | 17/25 (68.0%) | 22/25 (88.0%) |
| GPU1 | 25 | 17/25 (68.0%) | 20/25 (80.0%) |
| GPU2 | 25 | 19/25 (76.0%) | 23/25 (92.0%) |
| GPU3 | 25 | 21/25 (84.0%) | 25/25 (100.0%) |
| GPU4 | 25 | 20/25 (80.0%) | 23/25 (92.0%) |
| GPU5 | 25 | 17/25 (68.0%) | 22/25 (88.0%) |
| GPU6 | 24 | 17/24 (70.8%) | 20/24 (83.3%) |
| GPU7 | 24 | 17/24 (70.8%) | 20/24 (83.3%) |
| **Total** | 198 | 145/198 (73.2%) | 175/198 (88.4%) |

Notably, GPU3 finished its 25-question partition with a perfect 25/25: all four of its Pass-1 errors were recovered through the stochastic retry cascade.


Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tok = AutoTokenizer.from_pretrained("FINAL-Bench/Darwin-36B-Opus", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "FINAL-Bench/Darwin-36B-Opus",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Derive the equation for relativistic kinetic energy."}
]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=5120, temperature=0.6, do_sample=True)
# Decode only the newly generated tokens
print(tok.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Answer Extraction for Evaluations

This is a thinking model — responses always begin with a `<think>` reasoning trace. For benchmarks, extract the final answer after `</think>`. If the tokenizer registers `<think>`/`</think>` as special tokens, decode with `skip_special_tokens=False` so the marker survives:

```python
response = tok.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=False)
idx = response.rfind("</think>")
answer_part = response[idx + len("</think>"):].strip() if idx >= 0 else response.strip()
# Remove the end-of-turn marker retained by skip_special_tokens=False
answer_part = answer_part.replace("<|im_end|>", "").strip()
```
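For multiple-choice scoring, the remaining text still has to be reduced to a single letter. One possible heuristic (hypothetical; the evaluation harness's actual parser is not published):

```python
import re

def extract_choice(answer_part):
    """Take the last standalone A-D letter in the text as the model's choice."""
    letters = re.findall(r"\b([A-D])\b", answer_part)
    return letters[-1] if letters else None
```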

Recommended Settings

  • Temperature: 0.6–0.7 for reasoning / majority voting; 0.0 for greedy deterministic
  • `max_new_tokens`: ≥5120 to accommodate full `<think>` trajectories
  • Chat template: `<|im_start|>assistant\n<think>\n` auto-inserted by `apply_chat_template(add_generation_prompt=True)`

Model Specifications

| Specification | Value |
|---------------|-------|
| Architecture | Qwen3MoE (Qwen3.6 codebase) |
| Total parameters | 36.0 B |
| Active parameters | ~3 B (top-8 of 256 routed experts per layer) |
| Layers | 40 |
| Hidden size | 2048 |
| Attention heads | 24 Q + 4 KV (GQA) |
| Head dimension | 256 |
| Experts per layer | 256 routed + 1 shared |
| Context length | 262,144 tokens |
| Vocabulary | 248,320 |
| Dtype | bfloat16 |
| Checkpoint size | ~65 GB (21 shards) |
| License | Apache 2.0 |

VRAM Requirements

| Precision | VRAM | Recommended GPU |
|-----------|------|-----------------|
| bf16 (full) | ~72 GB | 1× H100 80GB / 1× B200 |
| 8-bit | ~40 GB | 1× A100 40GB+ / 1× L40S |
| 4-bit | ~22 GB | 1× RTX 4090 / 1× A10 |
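The 4-bit row corresponds to transformers' bitsandbytes integration; a sketch assuming `bitsandbytes` is installed (actual VRAM varies with context length and batch size):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# NF4 4-bit quantization with bf16 compute (roughly the ~22 GB row above)
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "FINAL-Bench/Darwin-36B-Opus",
    quantization_config=bnb,
    device_map="auto",
    trust_remote_code=True,
)
```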

Darwin Model Family

| Model | Base | Params | GPQA Diamond |
|-------|------|--------|--------------|
| Darwin-4B-Genesis | Qwen3.5-4B | 4 B | — |
| Darwin-9B-Opus | Qwen3.5-9B | 9 B | — |
| Darwin-27B-Opus | Qwen3.5-27B | 27 B | 86.9% |
| Darwin-31B-Opus | Gemma2-27B × variants | 31 B | 85.9% |
| Darwin-36B-Opus | Qwen3.6-35B-A3B | 36 B (A3B) | 88.4% |

Key Findings

  1. Evolutionary merging continues to scale. Across three successive parameter tiers (27B → 31B → 36B), each new Darwin Opus model surpasses the prior one's GPQA Diamond score while maintaining the same zero-training methodology.

  2. Hybrid-attention MoE preserves reasoning under recombination. The Father's 75% Gated-DeltaNet + 25% Gated-Attention architecture, inherited intact, demonstrates robustness to tensor-level recombination — a notable result given that MoE expert routing is sensitive to weight perturbation.

  3. Stochastic retry closes the greedy gap. The +15.2 percentage-point lift from Pass 1 (73.2%) to Pass 2 (88.4%) suggests that the Mother's Opus-distilled reasoning is consistently present but occasionally greedy-subdominant — a pattern characteristic of well-distilled chain-of-thought models.


References

  • Rein et al., GPQA: A Graduate-Level Google-Proof Q&A Benchmark, 2024.
  • Qwen Team, Qwen3.6 Technical Report, 2026.

Built By

FINAL-Bench / VIDRAFT_LAB — Darwin V7 evolutionary breeding engine.

  • Father base weights by the Qwen Team.
  • Mother by @hesamation (Claude Opus 4.6 as teacher).

Citation

```bibtex
@misc{darwin-36b-opus,
  title   = {Darwin-36B-Opus: Darwin V7 Evolutionary Merge on Qwen3.6-35B-A3B},
  author  = {FINAL-Bench and VIDRAFT_LAB},
  year    = {2026},
  url     = {https://huggingface.co/FINAL-Bench/Darwin-36B-Opus},
  note    = {Qwen3.6-35B-A3B (Father) × Opus-distilled variant (Mother), Darwin V7 engine, 88.4% GPQA Diamond}
}
```