TrevorJS/gemma-4-26B-A4B-it-uncensored


Uncensored version of google/gemma-4-26B-A4B-it with refusal behavior removed.

Results

| Metric | Before | After |
| --- | --- | --- |
| Refusals (mlabonne, 100 prompts) | 98/100 | 1/100 effective (3 flagged, 2 refusal-then-comply) |
| Refusals (cross-dataset, 686 prompts) | n/a | 5/686 (0.7%) |
| KL divergence | 0 (baseline) | 0.09 |
| Quality (harmless response length ratio) | 1.0 | ~1.01 (no degradation) |
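The KL divergence row is a drift check: it compares the abliterated model's next-token distributions against the base model's on the same prompts. The card does not show the evaluation harness, so the function below is only an illustrative sketch of such a metric:

```python
import torch
import torch.nn.functional as F

def kl_to_baseline(base_logits: torch.Tensor, ablated_logits: torch.Tensor) -> torch.Tensor:
    """Mean KL(base || ablated) over token positions.

    base_logits, ablated_logits: (seq_len, vocab) next-token logits from the
    base and abliterated models on the same input. Returns a scalar tensor;
    0 means the abliterated model's distributions are unchanged.
    """
    p_log = F.log_softmax(base_logits.float(), dim=-1)
    q_log = F.log_softmax(ablated_logits.float(), dim=-1)
    # KL per position = sum_v p(v) * (log p(v) - log q(v)), then average
    return (p_log.exp() * (p_log - q_log)).sum(dim=-1).mean()
```

A value of 0.09 would indicate only a mild shift in output distributions relative to the base model.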

Cross-Dataset Validation

Tested against 4 independent prompt datasets to verify generalization:

| Dataset | Prompts | Refusals |
| --- | --- | --- |
| JailbreakBench | 100 | 1/100 |
| tulu-harmbench | 320 | 1/320 |
| NousResearch/RefusalDataset | 166 | 0/166 |
| mlabonne/harmful_behaviors | 100 | 3/100 |
| Total | 686 | 5/686 (0.7%) |

Every flagged refusal was manually audited. Most are "refusal-then-comply" false positives, where the model adds an AI-identity disclaimer and then answers the question anyway.

Method

Norm-preserving biprojected abliteration on the dense pathway (o_proj + shared mlp.down_proj), plus Expert-Granular Abliteration (EGA) on all 128 MoE expert down_proj slices per layer.

EGA (OBLITERATUS) hooks the MoE routers during probing to compute per-expert routing weights for harmful vs harmless prompts, then applies norm-preserving projection (grimjim) to each expert individually. Dense-only abliteration leaves 29/100 refusals; adding EGA drops it to 3/100.
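The router-probing step can be sketched roughly as below. This is an illustrative sketch, not the OBLITERATUS implementation: the `router_name` suffix (`"gate"`) and the assumption that the hooked module emits raw router logits are both hypothetical and would need to match the actual Gemma MoE module layout.

```python
import torch
from collections import defaultdict

def collect_expert_routing(model, prompt_batches, router_name="gate"):
    """Hook every module whose name ends in `router_name` and accumulate
    softmaxed routing weights per expert. Running this separately on
    harmful and harmless prompt sets yields the per-expert routing mass
    that EGA compares expert-by-expert."""
    totals = defaultdict(lambda: None)
    hooks = []

    def make_hook(layer_key):
        def hook(module, inputs, output):
            # Assumed: router output is (tokens, n_experts) logits
            probs = torch.softmax(output.float(), dim=-1)
            summed = probs.sum(dim=0)  # per-expert routing mass this batch
            prev = totals[layer_key]
            totals[layer_key] = summed if prev is None else prev + summed
        return hook

    for name, module in model.named_modules():
        if name.endswith(router_name):
            hooks.append(module.register_forward_hook(make_hook(name)))
    try:
        for batch in prompt_batches:
            with torch.no_grad():
                model(**batch)
    finally:
        for h in hooks:
            h.remove()
    return dict(totals)
```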

Pipeline

  1. Load model in bf16 with LoRA adapters on o_proj and mlp.down_proj
  2. Collect residual activations for 400 harmful + 400 harmless prompts (mlabonne datasets)
  3. Winsorize activations at 99.5th percentile (clamps GeGLU outlier activations in Gemma family)
  4. Compute per-layer refusal direction: normalize(mean(harmful) - mean(harmless))
  5. Orthogonalize each direction against harmless mean (double-pass Gram-Schmidt)
  6. Apply norm-preserving weight modification to o_proj and down_proj in all layers
  7. Hook MoE routers, collect per-expert routing weights for harmful vs harmless prompts
  8. Apply same norm-preserving modification to all 128 expert down_proj slices per layer
  9. Merge LoRA adapters into base weights for clean tensor names
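Steps 3-5 of the pipeline can be sketched as follows. This is a minimal sketch, not the repo's exact code: the tensor shapes and the per-tensor winsorization granularity are assumptions.

```python
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor,
                      winsor_q: float = 0.995) -> torch.Tensor:
    """Compute one layer's refusal direction from residual activations.

    harmful, harmless: (n_prompts, d_model) activation matrices.
    Returns a unit vector orthogonal to the harmless mean direction.
    """
    # Step 3: winsorize -- clamp outlier activations at the given quantile
    def winsorize(x):
        hi = torch.quantile(x.abs().float(), winsor_q)
        return x.clamp(-hi, hi)
    harmful, harmless = winsorize(harmful), winsorize(harmless)

    # Step 4: refusal direction = normalized difference of means
    harmless_mean = harmless.mean(dim=0)
    d = harmful.mean(dim=0) - harmless_mean
    d = d / d.norm()

    # Step 5: double-pass Gram-Schmidt against the harmless mean direction
    h = harmless_mean / harmless_mean.norm()
    for _ in range(2):  # second pass removes floating-point residue
        d = d - (d @ h) * h
        d = d / d.norm()
    return d
```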

Parameters

| Parameter | Value |
| --- | --- |
| Layers abliterated | 100% |
| Scale | 1.0 |
| Winsorization | 0.995 |
| Experts abliterated | 100% (128/128 per layer) |
| Expert scale | 1.0 |

How this differs from vanilla heretic

  • Norm-preserving biprojection instead of standard projection (preserves weight magnitudes)
  • Per-layer refusal directions instead of one global direction
  • Deterministic single-pass instead of 50-trial Optuna search (faster, same or better results)
  • LoRA merge before save for clean GGUF-compatible tensor names
  • Expert-Granular Abliteration for MoE expert weights (not supported in heretic)
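The first bullet can be illustrated with a minimal sketch: project the refusal direction out of a weight matrix on its output side, then rescale each row back to its original norm so weight magnitudes are preserved. The actual biprojection used here likely differs in detail; this shows only the norm-preserving idea.

```python
import torch

def norm_preserving_ablate(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Remove the refusal direction r from the output space of W, keeping
    per-row weight norms unchanged.

    W: (d_out, d_in) weight matrix writing into the residual stream along
    d_out (e.g. an o_proj or down_proj slice); r: refusal direction (d_out,).
    """
    r = r / r.norm()
    orig_norms = W.norm(dim=1, keepdim=True)
    # Standard projection: W' = (I - r r^T) W = W - r (r^T W)
    W_proj = W - torch.outer(r, r @ W)
    # Norm preservation: rescale each row back to its original magnitude
    new_norms = W_proj.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return W_proj * (orig_norms / new_norms)
```

Note that the rescaling reintroduces a small component along r, so the result is not exactly orthogonal to the refusal direction; the trade-off is that weight magnitudes (and hence activation scales) stay intact.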

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "TrevorJS/gemma-4-26B-A4B-it-uncensored",
    dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("TrevorJS/gemma-4-26B-A4B-it-uncensored")

messages = [{"role": "user", "content": "Your prompt here"}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)
outputs = model.generate(inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```

Reproduction

Full code and experiment data: abliteration research repo

```shell
python scripts/ega.py --model google/gemma-4-26B-A4B-it \
  --top-pct 100 --strip-topic-markers --skip-prefix --batch-size 4 \
  --save output_dir
```