
groxaxo/Qwen3.6-27B-GPTQ-Pro-4bit


🚀 Qwen3.6-27B-GPTQ-Pro-4Bit

Welcome to Qwen3.6-27B-GPTQ-Pro-4Bit – a titan of reasoning and generation, elegantly squeezed into a remarkably efficient 4-bit package. It punches well above its weight class while keeping your VRAM happy and your inference speeds blazingly fast. Thank you to the Qwen team for another amazing model.

🌟 Why the "Pro"?

This isn't your average quantization. We used the GPTQ-Pro framework combined with the FOEM (First-Order Error Metric) approach. This technique preserves the most critical weights during 4-bit compression by estimating, to first order, the impact of quantizing each weight on the model's loss.

The result?

  • Near-Lossless Performance: Enjoy the reasoning, coding prowess, and broad knowledge of a 27-billion-parameter model with a drastically reduced memory footprint.
  • Marlin Optimized: Ready out of the box for Marlin kernels, delivering maximum tokens-per-second throughput in serving engines like vLLM.
  • Consumer Hardware Friendly: Fit a 27B model on consumer GPUs with room to spare for long context windows.
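To illustrate the general idea of 4-bit weight compression, here is a toy sketch of plain round-to-nearest, per-row symmetric quantization. This is not the GPTQ-Pro/FOEM algorithm (which additionally weighs each weight's effect on the loss); the array shapes are arbitrary:

```python
import numpy as np

def quantize_4bit(w, axis=1):
    """Symmetric per-row 4-bit quantization: round each row onto 16 levels in [-8, 7]."""
    scale = np.abs(w).max(axis=axis, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7)
    return q * scale  # dequantized weights

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128)).astype(np.float32)
w_hat = quantize_4bit(w)

# Even naive round-to-nearest keeps the relative reconstruction error modest;
# error-aware schemes like GPTQ-Pro/FOEM push it lower on the weights that matter.
err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative error: {err:.4f}")
```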

This repository contains a 4-bit GPTQ-Pro quantization of unsloth/Qwen3.6-27B, produced with GPTQModel and the FOEM/GPTAQ-style quality settings used in the GPTQ-Pro project.
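The memory savings are easy to estimate from first principles. The back-of-the-envelope figures below cover raw weights only and ignore quantization metadata (group scales and zero points), activations, and the KV cache:

```python
# Rough weight-memory estimate for a 27B-parameter model.
params = 27e9
gib = 1024 ** 3

fp16_gib = params * 16 / 8 / gib  # 2 bytes per weight
q4_gib = params * 4 / 8 / gib     # 0.5 bytes per weight

print(f"fp16 weights: ~{fp16_gib:.1f} GiB, 4-bit weights: ~{q4_gib:.1f} GiB")
```

At roughly 12.6 GiB of weights, the model splits comfortably across two 24 GB consumer GPUs with headroom for the KV cache.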

Source project: https://github.com/groxaxo/GPTQ-Pro

Deployment

vLLM

CUDA_VISIBLE_DEVICES=0,1 vllm serve groxaxo/Qwen3.6-27B-GPTQ-Pro-4Bit \
  --dtype float16 \
  --quantization gptq_marlin \
  --disable-custom-all-reduce \
  --tensor-parallel-size 2 \
  --max-model-len 132144 \
  --reasoning-parser qwen3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --gpu-memory-utilization 0.92
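Once the server is up, it exposes vLLM's OpenAI-compatible API (default port 8000). A minimal client sketch; the port, endpoint path, and the `chat_template_kwargs` thinking switch are standard vLLM/Qwen3 conventions rather than anything specific to this card:

```python
import json
import urllib.request

# Chat-completions request against the local vLLM server (default port 8000).
payload = {
    "model": "groxaxo/Qwen3.6-27B-GPTQ-Pro-4Bit",
    "messages": [{"role": "user", "content": "Write a short deployment checklist."}],
    "max_tokens": 128,
    # Qwen3-style switch: skip the thinking block when you want a plain answer.
    "chat_template_kwargs": {"enable_thinking": False},
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```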

Local path

CUDA_VISIBLE_DEVICES=0,1 vllm serve /path/to/Qwen3.6-27B-GPTQ-Pro-4Bit \
  --dtype float16 \
  --quantization gptq_marlin \
  --disable-custom-all-reduce \
  --tensor-parallel-size 2 \
  --max-model-len 132144

Transformers

from gptqmodel import BACKEND, GPTQModel

model = GPTQModel.load(
    "groxaxo/Qwen3.6-27B-GPTQ-Pro-4Bit",
    backend=BACKEND.GPTQ_MARLIN,
    device="cuda:0",
)

result = model.generate("Write a short deployment checklist.", max_new_tokens=64)[0]
print(model.tokenizer.decode(result, skip_special_tokens=True))

Notes

  • Tested with tensor parallel size 2 on RTX 3090 GPUs.
  • Use float16 and gptq_marlin for the most reliable vLLM startup path.
  • The quantization and serving workflow lives in the GPTQ-Pro repository above.
  • MTP/speculative decoding is detected by vLLM for this model, but on 2× RTX 3090 a launch with --max-model-len 262144 runs out of memory during KV-cache setup.
  • The working local vLLM configuration I verified is --max-model-len 65536 with --enforce-eager; that starts and serves, but metrics showed spec_decode_num_accepted_tokens_total=0, so speculative decoding does not improve speed yet.
  • If you test MTP, use --speculative-config '{"method":"mtp","num_speculative_tokens":2}' and disable thinking in the request payload when you want a plain answer.
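Putting the notes above together, an experimental MTP launch would look like the following. This is a sketch combining the verified 65536-context/eager settings with the speculative flag; it is the configuration to test, not one that currently yields a speedup:

```shell
CUDA_VISIBLE_DEVICES=0,1 vllm serve groxaxo/Qwen3.6-27B-GPTQ-Pro-4Bit \
  --dtype float16 \
  --quantization gptq_marlin \
  --tensor-parallel-size 2 \
  --max-model-len 65536 \
  --enforce-eager \
  --speculative-config '{"method":"mtp","num_speculative_tokens":2}'
```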

⚡ Speed Benchmarks

Tested on 2× NVIDIA RTX 3090 with vLLM (gptq_marlin, tensor-parallel=2, float16).

| Metric | Value |
|---|---|
| Avg Generation Speed | 64.0 tok/s |
| Median Generation Speed | 64.0 tok/s |
| Peak Generation Speed | 65.0 tok/s |
| Avg Time-to-First-Token | 54 ms |
| Median TTFT | 56 ms |

📋 Detailed Run Results

Test 1: Short Prompt → 256 Tokens (Streaming)

| Run | TTFT | Tokens | Speed | Total Time |
|---|---|---|---|---|
| 1 | 60 ms | 256 | 64.0 tok/s | 4.04 s |
| 2 | 55 ms | 256 | 64.0 tok/s | 4.04 s |
| 3 | 56 ms | 256 | 62.4 tok/s | 4.14 s |

Test 2: Medium Prompt → 512 Tokens (Non-Streaming)

| Run | Tokens | Speed | Total Time |
|---|---|---|---|
| 1 | 512 | 62.9 tok/s | 8.15 s |
| 2 | 512 | 63.0 tok/s | 8.13 s |
| 3 | 512 | 62.9 tok/s | 8.14 s |

Test 3: Short Burst → 64 Tokens (Streaming)

| Run | TTFT | Tokens | Speed |
|---|---|---|---|
| 1 | 50 ms | 64 | 65.0 tok/s |
| 2 | 56 ms | 64 | 64.9 tok/s |
| 3 | 56 ms | 64 | 64.7 tok/s |
| 4 | 54 ms | 64 | 64.9 tok/s |
| 5 | 48 ms | 64 | 64.9 tok/s |

📊 Quality Evaluation

  • Wikitext-2 test perplexity: 6.366 (n_ctx=1024)
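For reference, perplexity is the exponential of the average per-token negative log-likelihood. A minimal sketch of the metric itself (not the evaluation harness that produced the number above):

```python
import math

def perplexity(token_log_probs):
    """exp of the mean negative log-likelihood over a token sequence."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Sanity check: a model assigning every token probability 1/V scores perplexity V.
vocab = 50
uniform = [math.log(1.0 / vocab)] * 100
print(perplexity(uniform))
```

Lower is better: 6.366 on Wikitext-2 means the quantized model is, on average, about as uncertain as a uniform choice over ~6.4 tokens per position.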
