
douyamv/Gemma-4-31B-JANG_4M-CRACK-GGUF


GGUF quantizations of Gemma-4-31B-JANG_4M-CRACK for use with llama.cpp, LM Studio, Ollama, and other GGUF-compatible inference engines.

About the Model

  • Base model: google/gemma-4-31b-it
  • Architecture: Gemma 4 Dense Transformer (31B parameters, 60 layers)
  • Features: Hybrid Sliding/Global Attention, Vision + Audio multimodal
  • Modification: CRACK abliteration (refusal removal) + JANG v2 mixed-precision quantization

Why This Conversion?

The original model uses JANG v2 mixed-precision MLX quantization (attention 8-bit + MLP 4-bit), which is only compatible with vMLX. Standard tools (llama.cpp, LM Studio, oMLX, mlx-lm) cannot load this format due to mixed per-layer bit widths.

This repository provides standard GGUF quantizations that work everywhere.

Conversion Process

Original (JANG v2 MLX safetensors, ~18GB)
    ↓ dequantize (attention 8-bit → f16, MLP 4-bit → f16)
Intermediate (float16 safetensors, ~60GB)
    ↓ convert_hf_to_gguf.py + quantize
GGUF (various quantizations)

Note: Since the original model was already quantized (avg 5.1 bits/weight), the dequantized f16 intermediate is an approximation of the full-precision weights, not a recovery of them. Re-quantizing to GGUF adds only minimal further quality loss, since the attention layers were preserved at 8-bit in the original.
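For reference, steps 2 and 3 of the pipeline above correspond to the standard llama.cpp tooling. The file paths and quant target below are illustrative assumptions, a sketch rather than the exact script used for this release:

```shell
# Step 2: f16 safetensors directory -> f16 GGUF
# (convert_hf_to_gguf.py ships with the llama.cpp repository)
python convert_hf_to_gguf.py ./gemma-4-31b-f16 \
    --outfile gemma-4-31b-jang-crack-f16.gguf \
    --outtype f16

# Step 3: quantize the f16 GGUF down to each release target
./llama-quantize gemma-4-31b-jang-crack-f16.gguf \
    gemma-4-31b-jang-crack-Q4_K_M.gguf Q4_K_M
```

The JANG v2 dequantization in step 1 is format-specific and is not covered by the llama.cpp tools.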

Available Quantizations

File | Quant | Size | Quality | Notes
gemma-4-31b-jang-crack-Q3_K_M.gguf | Q3_K_M | ~14 GB | Acceptable | Minimum viable quality
gemma-4-31b-jang-crack-Q4_K_M.gguf | Q4_K_M | ~18 GB | Good | Best size/quality balance
gemma-4-31b-jang-crack-Q5_K_M.gguf | Q5_K_M | ~21 GB | Better | Recommended if RAM allows
gemma-4-31b-jang-crack-Q6_K.gguf | Q6_K | ~25 GB | Very good | High quality
gemma-4-31b-jang-crack-Q8_0.gguf | Q8_0 | ~33 GB | Near lossless | Closest to original

System Requirements

Quantization | Minimum RAM | Recommended
Q3_K_M | 20 GB | 24 GB
Q4_K_M | 24 GB | 32 GB
Q5_K_M | 28 GB | 36 GB
Q6_K | 32 GB | 40 GB
Q8_0 | 40 GB | 48 GB

Usage

LM Studio

Download any .gguf file and open it in LM Studio.

llama.cpp

./llama-cli -m gemma-4-31b-jang-crack-Q4_K_M.gguf -p "Hello" -n 256
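For a local OpenAI-compatible API, llama.cpp's server binary works with the same files. The port and context size below are arbitrary example values, not recommendations from the model authors:

```shell
./llama-server -m gemma-4-31b-jang-crack-Q4_K_M.gguf -c 8192 --port 8080

# Then query the server from another terminal:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 256}'
```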

Ollama

echo 'FROM ./gemma-4-31b-jang-crack-Q4_K_M.gguf' > Modelfile
ollama create gemma4-crack -f Modelfile
ollama run gemma4-crack
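If you want to bake defaults into the Ollama model, the Modelfile format also supports PARAMETER lines. The values below are illustrative, not tuned for this model:

```shell
cat > Modelfile <<'EOF'
FROM ./gemma-4-31b-jang-crack-Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 8192
EOF
ollama create gemma4-crack -f Modelfile
```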

License

Gemma License

Disclaimer

This model has had safety guardrails removed. Use responsibly and in compliance with applicable laws.
