Abiray/Qwen3.6-27B-AEON-Ultimate-Uncensored-GGUF

Qwen3.6-27B-AEON-Ultimate-Uncensored - GGUF

This repository contains GGUF quantizations of the heavily fine-tuned and uncensored AEON-7/Qwen3.6-27B-AEON-Ultimate-Uncensored model.

These quantizations were generated with a custom-compiled build of llama.cpp.
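For readers who want to reproduce the conversion environment: llama.cpp builds with its standard CMake flow, shown below. The custom patches referenced above are not described in this card, so these commands only produce a stock build:

```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```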

🧠 Advanced Architecture Preservation

Unlike standard quantization, these models were generated using a high-precision pipeline:

  • Output Tensor Preservation: We utilized --leave-output-tensor during quantization. This ensures the model's final projection head remains in FP16 precision, preventing the "numerical noise" that typically degrades the reasoning capabilities of smaller quantizations.
  • SSM Weight Fidelity: The 1D SSM routing weights (alpha/beta) were preserved to maintain the model's complex long-range memory and selective state dynamics.
  • Native Reasoning: The model is equipped with a native reasoning trigger. By utilizing the built-in <think> block, it can perform multi-step logical planning before providing a final response (see the sketch after this list for separating that block in code).
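Because the model may emit its planning inside a <think>...</think> block, downstream code often needs to separate that reasoning from the final answer. The helper below is a minimal sketch of one way to do this; the function name and regex are our own illustration, not part of the model or any library:

```python
import re

# Matches a <think>...</think> reasoning block, including newlines.
THINK_BLOCK = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw_text: str) -> tuple[str, str]:
    """Return (reasoning, answer) extracted from a raw completion."""
    match = THINK_BLOCK.search(raw_text)
    if match is None:
        # No reasoning block emitted; the whole text is the answer.
        return "", raw_text.strip()
    reasoning = match.group(1).strip()
    answer = THINK_BLOCK.sub("", raw_text).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>The user wants a short greeting.</think>Hello there."
)
print(answer)  # -> Hello there.
```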

🛠️ Key Fixes & Optimizations Applied

  • Tokenizer Hash Bypass: Patched the custom BPE hash check to ensure full compatibility with modern llama.cpp inference engines.
  • Native ChatML Injection: The tokenizer_config.json has been patched with a strict ChatML template, and the <|im_start|> and <|im_end|> tokens are baked into the GGUF metadata for plug-and-play compatibility (an example rendered prompt follows this list).
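For reference, ChatML wraps each conversation turn in those special tokens. A prompt rendered by the injected template looks like the following (the system and user text are placeholders):

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What are the rules of this city?<|im_end|>
<|im_start|>assistant
```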

📦 Available Quantizations

We provide two tiers of quantizations to suit different hardware and fidelity requirements.

🧠 SSM-Optimized / High-Fidelity (Recommended)

These files are prefixed with Qwen3.6-27B.... They are generated using --leave-output-tensor, ensuring the final projection head and 1D SSM routing weights are preserved at higher precision. These provide superior reasoning, logic retention, and intelligence.
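For illustration, preserving the output tensor with llama.cpp's quantize tool looks roughly like this; the file names are placeholders, and the binary's name and location can vary between llama.cpp versions:

```bash
# --leave-output-tensor keeps the output projection at full precision
# while the remaining weights are converted to Q4_K_M.
./llama-quantize --leave-output-tensor \
    Qwen3.6-27B-AEON-Ultimate-Uncensored-F16.gguf \
    Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf \
    Q4_K_M
```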

| File Name | Bit Size | Description |
| --- | --- | --- |
| Qwen3.6-27B-AEON-Ultimate-Uncensored-Q3_K_M.gguf | 3-bit | Optimized for memory-constrained hardware. |
| Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_S.gguf | 4-bit | Balanced for limited 16 GB VRAM setups. |
| Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf | 4-bit | Recommended. Best balance of reasoning fidelity and speed. |
| Qwen3.6-27B-AEON-Ultimate-Uncensored-Q5_K_M.gguf | 5-bit | High quality, minimal perplexity degradation. |
| Qwen3.6-27B-AEON-Ultimate-Uncensored-Q6_K.gguf | 6-bit | Near-lossless intelligence retention. |
| Qwen3.6-27B-AEON-Ultimate-Uncensored-Q8_0.gguf | 8-bit | Maximum precision for research and complex logic. |

🏛️ Standard / Legacy Quantizations

These files are prefixed with qwen3.6-27b.... These are standard baseline quantizations provided for maximum compatibility with older inference frameworks or specific hardware needs.

| File Name | Bit Size |
| --- | --- |
| qwen3.6-27b-uncensored-q3_k_m.gguf | 3-bit |
| qwen3.6-27b-uncensored-q4_k_s.gguf | 4-bit |
| qwen3.6-27b-uncensored-q4_k_m.gguf | 4-bit |
| qwen3.6-27b-uncensored-q5_k_m.gguf | 5-bit |
| qwen3.6-27b-uncensored-q6_k.gguf | 6-bit |
| qwen3.6-27b-uncensored-q8_0.gguf | 8-bit |

Note: The SSM-Optimized versions are roughly 300 MB larger because the output tensors are preserved at higher precision. We recommend them unless you encounter specific compatibility issues.

🎮 Ideal Use Cases

Because this model is "abliterated" and stripped of safety guardrails, it excels at:

  • Complex, Unfiltered Roleplay: Perfect for running highly detailed, gritty, or dystopian companion personas without refusal loops.
  • Creative Writing: Capable of handling dark sci-fi, Isekai, and mature narrative generation.
  • Local Autonomy: Runs entirely offline using standard tools like LM Studio, text-generation-webui, or Ollama (see the Modelfile sketch after this list).
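As one concrete offline route, Ollama can import a local GGUF through a Modelfile. A minimal sketch, where the model name aeon-27b and the sampling parameter are our own illustration:

```
# Modelfile: point Ollama at the downloaded GGUF.
FROM ./Qwen3.6-27B-AEON-Ultimate-Uncensored-Q4_K_M.gguf
PARAMETER temperature 0.8
```

Register and run it with `ollama create aeon-27b -f Modelfile`, then `ollama run aeon-27b`.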

💻 Quick Start (Python)

If you want to run the model directly in Python via llama-cpp-python, the baked-in ChatML template handles prompt formatting automatically:

```python
from llama_cpp import Llama

# Load the model (adjust threads/GPU layers based on your hardware).
llm = Llama(
    model_path="qwen3.6-27b-uncensored-q4_k_m.gguf",
    n_ctx=4096,
    n_threads=8,
)

messages = [
    {"role": "system", "content": "You are a gritty, unfiltered AI. Speak bluntly."},
    {"role": "user", "content": "What are the rules of this city?"},
]

# The ChatML template baked into the GGUF metadata formats the prompt.
response = llm.create_chat_completion(
    messages=messages,
    max_tokens=500,
    temperature=0.8,
)

print(response["choices"][0]["message"]["content"])
```
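If you have a GPU, llama-cpp-python can offload layers with the n_gpu_layers argument, and create_chat_completion supports streaming. A brief sketch (the offload setting is illustrative and hardware-dependent):

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads every layer; use a smaller value if VRAM is tight.
llm = Llama(
    model_path="qwen3.6-27b-uncensored-q4_k_m.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
)

# stream=True yields OpenAI-style chunks as tokens are generated.
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are the rules of this city?"}],
    max_tokens=500,
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
```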