Qwen3.6-27B — Claude Opus Reasoning Distilled · GGUF

GGUF quantized versions of rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled for use with llama.cpp, Ollama, LM Studio, and any GGUF-compatible runtime.

🙏 This model was trained following Jackrong's methodology, adapted for Qwen3.6-27B.


🎯 What Is This?

Qwen3.6-27B fine-tuned on ~14k Claude 4.6 Opus reasoning traces. The model adopts a structured, efficient thinking style — concise on simple tasks, deep on hard ones — while fully preserving the base model's exceptional coding and math capabilities.

Key improvement over base Qwen3.6-27B: reduced verbose reasoning loops, replaced with Claude-style structured step-by-step decomposition.

Base model benchmarks are listed in the 📊 Base Model Performance section below.


📦 Available Quantizations

Choose based on your available VRAM/RAM:

| File | Size | Min VRAM | Quality | Recommended For |
|------|------|----------|---------|-----------------|
| Q2_K | ~10GB | 12GB | ⭐⭐ | Very limited hardware |
| Q3_K_M | ~13GB | 16GB | ⭐⭐⭐ | Budget setups |
| Q4_K_S | ~16GB | 20GB | ⭐⭐⭐⭐ | Good balance |
| Q4_K_M | 16.5GB | 20GB | ⭐⭐⭐⭐ | ✅ Best choice for most users |
| Q5_K_S | ~19GB | 24GB | ⭐⭐⭐⭐⭐ | High quality |
| Q5_K_M | ~20GB | 24GB | ⭐⭐⭐⭐⭐ | High quality |
| Q6_K | ~23GB | 28GB | ⭐⭐⭐⭐⭐ | Near-lossless |
| Q8_0 | 28.6GB | 36GB | ⭐⭐⭐⭐⭐ | Maximum quality |

Q4_K_M is recommended for most users — best quality-to-size ratio, runs on a 24GB GPU with headroom.
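The table above can be turned into a small helper that picks the largest quantization fitting a given VRAM budget. This is an illustrative sketch (not part of any official tooling); the function name and the idea of "largest fitting quant = best" are my own, using the Min VRAM figures from the table.

```python
# Illustrative helper: pick the highest-quality quant whose minimum VRAM
# requirement (from the table above) fits a given budget.
QUANTS = [  # (name, min_vram_gb), ordered from lowest to highest quality
    ("Q2_K", 12), ("Q3_K_M", 16), ("Q4_K_S", 20), ("Q4_K_M", 20),
    ("Q5_K_S", 24), ("Q5_K_M", 24), ("Q6_K", 28), ("Q8_0", 36),
]

def best_quant(vram_gb: float):
    """Return the last (highest-quality) quant that fits, or None."""
    fitting = [name for name, need in QUANTS if need <= vram_gb]
    return fitting[-1] if fitting else None

print(best_quant(24))  # -> Q5_K_M
```

On a 24GB GPU this prefers Q5_K_M; if you want context-length headroom rather than maximum quality, step down to Q4_K_M as recommended above.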


🚀 Quick Start

llama.cpp

```bash
# Download
huggingface-cli download rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-GGUF \
  --include "*Q4_K_M*" --local-dir ./model

# Run CLI (coding preset — see the sampling parameters table below)
./llama-cli \
  -m ./model/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-Q4_K_M.gguf \
  --temp 0.6 \
  --top-p 0.95 \
  --top-k 20 \
  --ctx-size 8192 \
  -p "Implement a red-black tree in Python with insert and delete."

# Run as server (OpenAI-compatible API)
./llama-server \
  -m ./model/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-Q4_K_M.gguf \
  --temp 0.6 \
  --top-p 0.95 \
  --top-k 20 \
  --ctx-size 8192 \
  --port 8080
```

Ollama

```bash
# Create Modelfile (the hf.co/ prefix tells Ollama to pull the GGUF from Hugging Face)
cat > Modelfile << 'EOF'
FROM hf.co/rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-GGUF:Q4_K_M
PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER top_k 20
PARAMETER num_ctx 8192
EOF

ollama create qwen36-opus -f Modelfile
ollama run qwen36-opus
```
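Once the model is created, you can also query it programmatically through Ollama's local REST API. A minimal sketch, assuming a running Ollama instance on its default port 11434 and the `qwen36-opus` tag created above:

```python
import json
import urllib.request

def build_chat_request(prompt: str) -> dict:
    """Build a payload for Ollama's /api/chat endpoint,
    mirroring the PARAMETER values from the Modelfile above."""
    return {
        "model": "qwen36-opus",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        "options": {"temperature": 0.6, "top_p": 0.95, "top_k": 20},
    }

payload = build_chat_request("Write a binary search in Python.")

# Uncomment to send the request to a running Ollama instance:
# req = urllib.request.Request(
#     "http://localhost:11434/api/chat",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```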

LM Studio

Search for rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled-GGUF in the model browser and download your preferred quantization.

OpenAI-compatible API (llama-server)

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="qwen3.6-27b-opus",
    messages=[{"role": "user", "content": "Write a merge sort implementation in Python."}],
    max_tokens=4096,
    temperature=0.6,
    top_p=0.95,
)
print(response.choices[0].message.content)
```

⚙️ Recommended Sampling Parameters

| Mode | temperature | top_p | top_k | presence_penalty |
|------|-------------|-------|-------|------------------|
| Thinking (general) | 1.0 | 0.95 | 20 | 0.0 |
| Thinking (coding) | 0.6 | 0.95 | 20 | 0.0 |
| Non-thinking | 0.7 | 0.80 | 20 | 1.5 |
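The presets above are easy to keep as a small config and splat into client calls. A sketch — the preset names are my own, not part of any API; note that `top_k` is not in the standard OpenAI chat-completions schema, so when using the OpenAI client against llama-server it would need to go through `extra_body`:

```python
# Sampling presets mirroring the table above. Preset names are illustrative.
SAMPLING_PRESETS = {
    "thinking_general": {"temperature": 1.0, "top_p": 0.95, "top_k": 20, "presence_penalty": 0.0},
    "thinking_coding":  {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "presence_penalty": 0.0},
    "non_thinking":     {"temperature": 0.7, "top_p": 0.80, "top_k": 20, "presence_penalty": 1.5},
}

def sampling_args(mode: str) -> dict:
    """Return a copy of the preset for `mode`, safe to mutate per request."""
    return dict(SAMPLING_PRESETS[mode])

print(sampling_args("thinking_coding")["temperature"])  # -> 0.6
```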

🧠 Example Output Style

The model always reasons before answering:

```
<think>
Let me analyze this request carefully:

1. Identify the core objective...
2. Break the task into subcomponents...
3. Evaluate constraints and edge cases...
4. Formulate a step-by-step solution...
</think>
```

[Final Answer]
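Downstream code usually wants only the final answer, not the reasoning trace. A minimal sketch that strips the `<think>…</think>` block, assuming the runtime returns the tags verbatim as shown above:

```python
import re

# Remove the reasoning block so only the final answer remains.
# Assumes the <think>...</think> tags arrive verbatim in the output text.
THINK_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)

def strip_thinking(text: str) -> str:
    return THINK_RE.sub("", text).strip()

raw = "<think>\n1. Identify the core objective...\n</think>\n\n[Final Answer]"
print(strip_thinking(raw))  # -> [Final Answer]
```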

📊 Base Model Performance

| Benchmark | Qwen3.6-27B | Claude 4.5 Opus | Qwen3.5-397B |
|-----------|-------------|-----------------|--------------|
| SWE-bench Verified | 77.2 | 80.9 | 76.2 |
| SWE-bench Pro | 53.5 | 57.1 | 50.9 |
| Terminal-Bench 2.0 | 59.3 | 59.3 | 52.5 |
| AIME 2026 | 94.1 | 95.1 | 93.3 |
| GPQA Diamond | 87.8 | 87.0 | 88.4 |
| MMLU-Pro | 86.2 | 89.5 | 87.8 |

Source: Qwen3.6-27B official release


📖 Citation

```bibtex
@misc{rico03-qwen36-opus-reasoning,
  title  = {Qwen3.6-27B Claude Opus Reasoning Distilled},
  author = {rico03},
  year   = {2026},
  url    = {https://huggingface.co/rico03/Qwen3.6-27B-Claude-Opus-Reasoning-Distilled}
}
```

🙏 Acknowledgements


Released for research and personal use.
