
mudler/gemma-4-26B-A4B-it-APEX-GGUF


⚡ Each donation = another big MoE quantized

I host 25+ free APEX MoE quantizations as independent research. My only local hardware is an NVIDIA DGX Spark (122 GB unified memory) — enough for ~30-50B-class MoEs, but bigger ones (200B+) require rented compute on H100/H200/Blackwell, typically $20-100 per quant.
If APEX quants are useful to you, your support directly funds those bigger runs.

🎉 Patreon (Monthly)  |  ☕ Buy Me a Coffee  |  ⭐ GitHub Sponsors

💚 Big thanks to Hugging Face for generously donating additional storage — much appreciated.

Gemma 4 26B-A4B APEX GGUF

APEX (Adaptive Precision for EXpert Models) quantizations of google/gemma-4-26B-A4B-it.

Brought to you by the LocalAI team | APEX Project | Technical Report

Benchmark Results

Benchmarks coming soon (re-quantized with llama.cpp b8664, which includes the Gemma 4 tokenizer and logit softcapping fixes). For reference, APEX benchmarks on the Qwen3.5-35B-A3B architecture are available at mudler/Qwen3.5-35B-A3B-APEX-GGUF.

Available Files

| File | Profile | Size | Best For |
|------|---------|------|----------|
| gemma-4-26B-A4B-APEX-I-Balanced.gguf | I-Balanced | 19 GB | Best overall quality/size ratio |
| gemma-4-26B-A4B-APEX-I-Quality.gguf | I-Quality | 20 GB | Highest quality with imatrix |
| gemma-4-26B-A4B-APEX-Quality.gguf | Quality | 20 GB | Highest quality, standard |
| gemma-4-26B-A4B-APEX-Balanced.gguf | Balanced | 19 GB | General purpose |
| gemma-4-26B-A4B-APEX-I-Compact.gguf | I-Compact | 15 GB | Consumer GPUs, best quality/size |
| gemma-4-26B-A4B-APEX-Compact.gguf | Compact | 15 GB | Consumer GPUs |
| gemma-4-26B-A4B-APEX-I-Mini.gguf | I-Mini | 13 GB | Smallest viable, fastest inference |
| mmproj.gguf | Vision projector | 1.2 GB | Required for image understanding |

What is APEX?

APEX is a quantization strategy for Mixture-of-Experts (MoE) models. It classifies tensors by role (routed expert, shared expert, attention) and applies a layer-wise precision gradient: edge layers get higher precision, middle layers get more aggressive compression. I-variants use diverse imatrix calibration (chat, code, reasoning, tool-calling, agentic traces, Wikipedia).

See the APEX project for full details, technical report, and scripts.
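The edge-gradient idea above can be sketched in a few lines. This is an illustrative Python sketch only: the function name, the specific quant types (`Q6_K`, `Q3_K`), and the hard two-level cutoff are assumptions for demonstration; the real per-tensor mapping lives in the APEX scripts linked above.

```python
def apex_precision(layer: int, n_layers: int = 30, edge: int = 5) -> str:
    """Assign a quant type by depth: higher precision at the edge
    layers, more aggressive compression in the middle (5+5 symmetric
    edge gradient, matching this model's 30-layer config)."""
    if layer < edge or layer >= n_layers - edge:
        return "Q6_K"  # edge layers: higher precision (assumed type)
    return "Q3_K"      # middle layers: aggressive compression (assumed type)

# Precision schedule for all 30 layers: 5 high, 20 compressed, 5 high.
schedule = [apex_precision(i) for i in range(30)]
```

With a 5+5 symmetric gradient, exactly ten of the thirty layers stay at the higher precision; in practice APEX further differentiates by tensor role within each layer.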

Architecture

  • Model: Gemma 4 26B-A4B (google/gemma-4-26B-A4B-it)
  • Layers: 30
  • Experts: 128 routed (8 active per token)
  • Total Parameters: 26B
  • Active Parameters: ~4B per token
  • Vision: Built-in vision encoder (mmproj included)
  • APEX Config: 5+5 symmetric edge gradient across 30 layers
  • Calibration: v1.3 diverse dataset (chat, code, reasoning, multilingual, tool-calling, Wikipedia)
  • llama.cpp: Built with b8664 (includes Gemma 4 tokenizer fix, logit softcapping, newline split)
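The relationship between the 26B total and ~4B active figures follows from the 8-of-128 routing. A back-of-envelope sketch, under the simplifying assumption that only routed expert weights differ between the total and active counts (the derived split is an estimate, not an official figure):

```python
# Rough split of total vs. active parameters for an 8-of-128 MoE.
# 26B total and ~4B active come from the model card; the expert/dense
# split below is derived from them under the assumption that dense
# weights (attention, embeddings, shared experts) are always active.
n_experts, n_active = 128, 8
total_b, active_b = 26.0, 4.0  # billions of parameters

# total  = dense + expert_total
# active = dense + expert_total * (n_active / n_experts)
# => expert_total = (total - active) / (1 - n_active / n_experts)
expert_total_b = (total_b - active_b) / (1 - n_active / n_experts)
dense_b = total_b - expert_total_b

print(f"routed experts: ~{expert_total_b:.1f}B, dense/shared: ~{dense_b:.1f}B")
```

Under these assumptions the routed experts account for roughly 23.5B parameters, with about 2.5B always-active dense weights, which is why the model runs at the speed of a ~4B dense model while storing 26B parameters.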

Run with LocalAI

local-ai run mudler/gemma-4-26B-A4B-it-APEX-GGUF@gemma-4-26B-A4B-APEX-I-Balanced.gguf

Credits

APEX is brought to you by the LocalAI team. Developed through human-driven, AI-assisted research. Built on llama.cpp.
