# Abiray/Qwen3.6-27B-AEON-Ultimate-Uncensored-GGUF
This repository contains GGUF quantizations of the heavily fine-tuned and uncensored AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored model. These quantizations were generated using a custom-compiled build of llama.cpp.
## 🧠 Advanced Architecture Preservation
Unlike standard quantization, these models were generated using a high-precision pipeline:
- **Output Tensor Preservation:** We used `--leave-output-tensor` during quantization. This keeps the model's final projection head in `FP16` precision, preventing the "numerical noise" that typically degrades the reasoning capabilities of smaller quantizations (see the sketch after this list).
- **SSM Weight Fidelity:** The 1D SSM routing weights (alpha/beta) were preserved to maintain the model's complex long-range memory and selective state dynamics.
- **Native Reasoning:** The model is equipped with a native reasoning trigger. By using the built-in `<think>` block, it can perform multi-step logical planning before providing a final response.
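For illustration, an output-tensor-preserving quantization with llama.cpp's quantizer looks roughly like this. This is a sketch, not the exact command used for this repo; the input/output filenames are assumptions:

```bash
# --leave-output-tensor keeps output.weight unquantized, as described above.
./llama-quantize --leave-output-tensor \
    Qwen3.6-27B-AEON-Ultimate-Uncensored-F16.gguf \
    Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf \
    Q4_K_M
```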
## 🛠️ Key Fixes & Optimizations Applied
- **Tokenizer Hash Bypass:** Patched the custom BPE hash check to ensure full compatibility with modern `llama.cpp` inference engines.
- **Native ChatML Injection:** The `tokenizer_config.json` has been patched with a strict ChatML template. The `<|im_start|>` and `<|im_end|>` tokens are permanently baked into the GGUF metadata for plug-and-play compatibility (the rendered format is shown below).
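For orientation, standard ChatML renders a conversation in this shape (this is the common ChatML convention, not a dump of this repo's exact template):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```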
## 📦 Available Quantizations
We provide two tiers of quantizations to suit different hardware and fidelity requirements.
### 🧠 SSM-Optimized / High-Fidelity (Recommended)
These files are prefixed with `Qwen3.6-27B...`. They are generated using `--leave-output-tensor`, ensuring the final projection head and 1D SSM routing weights are preserved at higher precision. These provide superior reasoning, logic retention, and intelligence.
| File Name | Bit Size | Description |
|---|---|---|
| `Qwen3.6-27B-AEON-Ultimate-Uncensored-Q3_K_M.gguf` | 3-bit | Optimized for memory-constrained hardware. |
| `Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_S.gguf` | 4-bit | Balanced for limited 16GB VRAM setups. |
| `Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf` | 4-bit | **Recommended.** Best balance of reasoning fidelity and speed. |
| `Qwen3.6-27B-AEON-Ultimate-Uncensored-Q5_K_M.gguf` | 5-bit | High quality, minimal perplexity degradation. |
| `Qwen3.6-27B-AEON-Ultimate-Uncensored-Q6_K.gguf` | 6-bit | Near-lossless intelligence retention. |
| `Qwen3.6-27B-AEON-Ultimate-Uncensored-Q8_0.gguf` | 8-bit | Maximum precision for research and complex logic. |
### 🏛️ Standard / Legacy Quantizations
These files are prefixed with `qwen3.6-27b...`. They are standard baseline quantizations provided for maximum compatibility with older inference frameworks or specific hardware needs.
| File Name | Bit Size |
|---|---|
| `qwen3.6-27b-uncensored-q3_k_m.gguf` | 3-bit |
| `qwen3.6-27b-uncensored-q4_k_s.gguf` | 4-bit |
| `qwen3.6-27b-uncensored-q4_k_m.gguf` | 4-bit |
| `qwen3.6-27b-uncensored-q5_k_m.gguf` | 5-bit |
| `qwen3.6-27b-uncensored-q6_k.gguf` | 6-bit |
| `qwen3.6-27b-uncensored-q8_0.gguf` | 8-bit |
**Note:** The SSM-Optimized versions are slightly larger (by roughly 300 MB) due to the preserved output tensors. We recommend them unless you encounter specific compatibility issues.
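To fetch a file programmatically, here is a minimal sketch using `huggingface_hub`; the repo id is taken from this page's title, and you can swap in whichever filename from the tables above fits your hardware:

```python
from huggingface_hub import hf_hub_download

# Download a single GGUF file into the local Hugging Face cache
# and return its path on disk.
model_path = hf_hub_download(
    repo_id="Abiray/Qwen3.6-27B-AEON-Ultimate-Uncensored-GGUF",
    filename="Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf",
)
print(model_path)
```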
## 🎮 Ideal Use Cases
Because this model is "abliterated" and stripped of safety guardrails, it excels at:
- **Complex, Unfiltered Roleplay:** Perfect for running highly detailed, gritty, or dystopian companion personas without refusal loops.
- **Creative Writing:** Capable of handling dark sci-fi, isekai, and mature narrative generation.
- **Local Autonomy:** Runs entirely offline using standard tools like LM Studio, text-generation-webui, or Ollama (a sample Modelfile sketch follows this list).
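If you use Ollama, a minimal Modelfile sketch looks like this; the filename and sampling value are assumptions, so point it at whichever quant you downloaded:

```
# Hypothetical Modelfile: register a local GGUF from this repo with Ollama.
FROM ./qwen3.6-27b-uncensored-q4_k_m.gguf
PARAMETER temperature 0.8
```

Build and run it with `ollama create aeon -f Modelfile` followed by `ollama run aeon`.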
## 💻 Quick Start (Python)
If you want to run this directly from Python using `llama-cpp-python`, the baked-in ChatML template handles the formatting automatically:
```python
from llama_cpp import Llama

# Load the model (adjust threads/layers based on your hardware)
llm = Llama(
    model_path="Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf",
    n_ctx=4096,
    n_threads=8
)

messages = [
    {"role": "system", "content": "You are a gritty, unfiltered AI. Speak bluntly."},
    {"role": "user", "content": "What are the rules of this city?"}
]

response = llm.create_chat_completion(
    messages=messages,
    max_tokens=500,
    temperature=0.8
)

print(response['choices'][0]['message']['content'])
```
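For interactive use, the same call supports token streaming (standard `llama-cpp-python` behavior; this snippet assumes the `llm` and `messages` objects defined above):

```python
# Stream tokens as they are generated instead of waiting for the full reply.
for chunk in llm.create_chat_completion(
    messages=messages,
    max_tokens=500,
    temperature=0.8,
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
```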