
mudler/gemma-4-26B-A4B-it-heretic-APEX-GGUF


⚡ Each donation = another big MoE quantized

I host 25+ free APEX MoE quantizations as independent research. My only local hardware is an NVIDIA DGX Spark (122 GB unified memory) — enough for ~30-50B-class MoEs, but bigger ones (200B+) require rented compute on H100/H200/Blackwell, typically $20-100 per quant.
If APEX quants are useful to you, your support directly funds those bigger runs.

🎉 Patreon (Monthly)  |  ☕ Buy Me a Coffee  |  ⭐ GitHub Sponsors

💚 Big thanks to Hugging Face for generously donating additional storage — much appreciated.

Gemma 4 26B-A4B Heretic (Abliterated) APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of gemma-4-26B-A4B-it-heretic — an abliterated (uncensored) version of Gemma 4, created with the Heretic tool (v1.2.0) using Arbitrary-Rank Ablation (ARA) on layers 10-30 to reduce refusals while preserving capabilities (KL divergence of 0.0499 from the original model).
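The reported KL divergence measures how far the ablated model's next-token distribution drifts from the original's; a value near zero means behavior is largely preserved. A minimal sketch of that measurement, using toy distributions rather than actual model outputs:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two next-token probability distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Toy distributions standing in for the original and ablated models'
# next-token probabilities over a tiny vocabulary.
original = [0.70, 0.20, 0.10]
ablated  = [0.68, 0.22, 0.10]

print(round(kl_divergence(original, ablated), 5))  # small positive value: behavior mostly preserved
```

In practice this is averaged over many prompts and token positions; identical distributions give exactly zero.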

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks coming soon. For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

Available Files

| File | Profile | Size | Best For |
|------|---------|------|----------|
| gemma-4-26B-A4B-heretic-APEX-I-Balanced.gguf | I-Balanced | ~19 GB | Best overall quality/size ratio |
| gemma-4-26B-A4B-heretic-APEX-I-Quality.gguf | I-Quality | ~20 GB | Highest quality with imatrix |
| gemma-4-26B-A4B-heretic-APEX-Quality.gguf | Quality | ~20 GB | Highest quality standard |
| gemma-4-26B-A4B-heretic-APEX-Balanced.gguf | Balanced | ~19 GB | General purpose |
| gemma-4-26B-A4B-heretic-APEX-I-Compact.gguf | I-Compact | ~15 GB | Consumer GPUs, best quality/size |
| gemma-4-26B-A4B-heretic-APEX-Compact.gguf | Compact | ~15 GB | Consumer GPUs |
| gemma-4-26B-A4B-heretic-APEX-I-Mini.gguf | I-Mini | ~13 GB | Smallest viable, fastest inference |
| mmproj.gguf | Vision projector | ~1.2 GB | Required for image understanding |

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, while middle layers are compressed more aggressively. I-variants use diverse imatrix calibration data (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).

See the APEX project for full details, technical report, and scripts.
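The role classification plus precision gradient described above can be sketched as follows. The tensor name patterns follow common llama.cpp MoE conventions, and the quant-type labels are illustrative assumptions, not the actual APEX mapping:

```python
def classify_tensor(name: str) -> str:
    """Bucket a GGUF tensor by role. Name patterns follow common
    llama.cpp MoE conventions; the real APEX mapping may differ."""
    if any(p in name for p in ("ffn_gate_exps", "ffn_up_exps", "ffn_down_exps")):
        return "routed_expert"
    if "shexp" in name:          # shared-expert tensors
        return "shared_expert"
    if "attn" in name:
        return "attention"
    return "other"

def pick_quant(role: str, layer: int, n_layers: int = 30, edge: int = 5) -> str:
    """Layer-wise precision gradient: edge layers keep higher precision,
    middle layers are compressed harder (quant labels are illustrative)."""
    is_edge = layer < edge or layer >= n_layers - edge
    if role == "routed_expert":
        return "Q5_K" if is_edge else "Q3_K"  # bulk of parameters: compress hardest
    if role in ("shared_expert", "attention"):
        return "Q6_K" if is_edge else "Q4_K"  # always-active paths: keep more bits
    return "Q6_K"

print(pick_quant(classify_tensor("blk.0.ffn_up_exps.weight"), layer=0))    # edge layer
print(pick_quant(classify_tensor("blk.15.ffn_up_exps.weight"), layer=15))  # middle layer
```

Routed experts hold most of the parameters but each is only occasionally active, which is why they tolerate the most aggressive compression.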

Architecture

  • Model: gemma-4-26B-A4B-it-heretic (same architecture as gemma-4-26B-A4B-it)
  • Layers: 30
  • Experts: 128 routed (8 active per token)
  • Total Parameters: 26B
  • Active Parameters: ~4B per token
  • Vision: Built-in vision encoder (mmproj included)
  • APEX Config: 5+5 symmetric edge gradient across 30 layers
  • Calibration: v1.3 diverse dataset
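The expert arithmetic above (128 routed experts, 8 active per token, so only ~4B of 26B parameters fire per token) comes from top-k routing. A minimal sketch of such a router, with random scores standing in for a learned gating network:

```python
import math
import random

def route(logits, k=8):
    """Pick the top-k experts for one token and softmax-normalize
    their gating weights over just the selected experts."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = [math.exp(logits[i]) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(128)]  # router scores for 128 experts
selected = route(logits, k=8)

print(len(selected))                           # 8 experts active per token
print(round(sum(w for _, w in selected), 6))   # gating weights renormalize to 1.0
```

Since only the selected experts' weights are read for a given token, inference cost scales with the active (~4B) rather than total (26B) parameter count.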

Run with LocalAI

local-ai run mudler/gemma-4-26B-A4B-it-heretic-APEX-GGUF@gemma-4-26B-A4B-heretic-APEX-I-Balanced.gguf

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
