GestaltLabs/Ornstein-Hermes-3.6-27b-GGUF

Ornstein-hermes-3.6-27b — GGUF Quantizations

GGUF quantizations of GestaltLabs/Ornstein-hermes-3.6-27b — a Hermes-format function-calling fine-tune of Ornstein-3.6-27B (Qwen 3.6 27B multimodal).

All K- and I-quants are calibrated with an imatrix computed from 800 high-quality Hermes-format tool-use conversations sampled from DJLougen/Acta-Synthetic, so quantization error is weighted by tool-calling activation statistics rather than generic web text.
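
For readers curious how such a calibration corpus is prepared: llama.cpp's llama-imatrix tool consumes a plain-text file, so each conversation is rendered with the model's chat template and concatenated. A minimal sketch, illustrative only (not the uploader's exact script; the sample conversation is made up):

```python
# Illustrative sketch: render Hermes conversations into a ChatML-style text
# corpus for imatrix calibration. Not the exact pipeline used for this repo.
def to_chatml(conversation):
    """Render one conversation with <|im_start|>/<|im_end|> delimiters."""
    return "".join(
        f"<|im_start|>{turn['role']}\n{turn['content']}<|im_end|>\n"
        for turn in conversation
    )

# A made-up sample conversation in Hermes tool-calling form.
conversations = [
    [
        {"role": "user", "content": "What's the weather in Tokyo?"},
        {"role": "assistant",
         "content": '<tool_call>{"name": "get_weather", "arguments": {"city": "Tokyo"}}</tool_call>'},
    ],
]

corpus = "\n".join(to_chatml(c) for c in conversations)
# `corpus` would be written to a .txt file and passed to llama-imatrix via -f
```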

Support This Work

I'm a PhD student in visual neuroscience at the University of Toronto who also happens to spend way too much time fine-tuning, merging, and quantizing open-weight models on rented H100s and a local DGX Spark. All training compute is self-funded — balancing GPU costs against a student budget. If my uploads have been useful to you, consider buying a PhD student a coffee. It goes a long way toward keeping these experiments running.

Support on Ko-fi


Available Quants

| Quant | Bits/weight | Size | Notes |
|-------|-------------|------|-------|
| Q8_0 | ~8.5 | 26.6 GB | Near-lossless. Use if you have ≥32 GB VRAM/RAM. |
| Q6_K | ~6.6 | 20.6 GB | High fidelity, very small loss vs F16. |
| Q5_K_M | ~5.7 | 17.9 GB | Strong default for ≥24 GB cards. |
| Q4_K_M | ~4.8 | 15.4 GB | Most popular 4-bit; great quality/size tradeoff. |
| IQ4_NL | ~4.5 | 14.7 GB | imatrix-aware non-linear 4-bit, smaller than Q4_K_M. |
| IQ4_XS | ~4.3 | 14.0 GB | Smallest 4-bit; minor quality drop vs Q4_K_M. |
| Q3_K_M | ~3.9 | 12.4 GB | Aggressive but usable; ≥16 GB VRAM. |
| IQ3_M | ~3.7 | 11.7 GB | imatrix 3-bit; better than Q3_K_M at similar size. |
| IQ2_M | ~2.7 | 9.3 GB | Tight VRAM budget; expect noticeable degradation. |
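
As a sanity check on the table, file size scales almost linearly with average bits per weight. A rough back-of-envelope estimator (an approximation only; the bpw figures above are averages across mixed-precision tensors, and real GGUF files differ by up to a gigabyte or so):

```python
# Rough GGUF size estimate: parameter count × average bits-per-weight / 8.
# Ignores metadata and per-tensor precision mixing, so treat as approximate.
def approx_size_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * bits_per_weight / 8  # decimal GB

estimate = approx_size_gb(27, 4.8)  # Q4_K_M: ≈ 16.2 GB vs the 15.4 GB listed
```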

Picking a quant

  • 24 GB GPU (e.g. RTX 3090/4090): Q4_K_M or IQ4_NL
  • 32 GB (e.g. RTX 5090): Q5_K_M
  • 48 GB (e.g. RTX A6000): Q6_K
  • 80 GB (H100/A100): Q8_0
  • CPU-only with 32 GB RAM: IQ4_XS or Q3_K_M
  • 16 GB VRAM: IQ3_M or IQ2_M

Usage

llama.cpp

./llama-cli -m Ornstein-hermes-3.6-27b-Q4_K_M.gguf \
  -ngl 999 \
  -c 8192 \
  --temp 0.7 \
  -p "<|im_start|>user\nWhat's the weather in Tokyo?<|im_end|>\n<|im_start|>assistant\n"

For tool calling, either inject the function signatures into the system prompt yourself (see the Hermes format section below) or use the OpenAI-compatible server (llama-server), which handles tool registration automatically.
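
Against a running llama-server instance, tools go in the standard OpenAI-style request body and the server renders them into the system prompt for you. A minimal payload sketch (the model alias and the get_weather tool are placeholders, not part of this repo):

```python
import json

# Sketch of an OpenAI-style tool-calling request body for llama-server.
# "ornstein-hermes" and get_weather are illustrative placeholders.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "ornstein-hermes",
    "messages": [{"role": "user", "content": "What's the weather in Tokyo?"}],
    "tools": tools,
    "tool_choice": "auto",
}

body = json.dumps(payload)
# POST `body` to http://localhost:8080/v1/chat/completions (llama-server's default port)
```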

Ollama

ollama create ornstein-hermes-q4 -f - <<EOF
FROM ./Ornstein-hermes-3.6-27b-Q4_K_M.gguf
TEMPLATE """{{- range .Messages }}<|im_start|>{{ .Role }}
{{ .Content }}<|im_end|>
{{ end }}<|im_start|>assistant
"""
PARAMETER stop "<|im_end|>"
EOF

ollama run ornstein-hermes-q4

LM Studio

  1. Download any GGUF from this repo
  2. Open in LM Studio (auto-detects Qwen3 chat template)
  3. Use the built-in tool-calling interface

Hermes Tool-Calling Format

The model was trained on Hermes-style function calling. Expected message flow:

<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags.
<tools>
[{"name": "get_weather", "description": "...", "parameters": {...}}]
</tools>
<|im_end|>
<|im_start|>user
What's the weather in Tokyo?<|im_end|>
<|im_start|>assistant
<think>The user wants weather info. I'll call get_weather.</think>
<tool_call>{"name": "get_weather", "arguments": {"city": "Tokyo"}}</tool_call><|im_end|>
<|im_start|>tool
<tool_response>{"temp_c": 18, "condition": "cloudy"}</tool_response><|im_end|>
<|im_start|>assistant
It's 18°C and cloudy in Tokyo.<|im_end|>
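
When driving the model directly (e.g. via llama-cli) rather than through a server, client code has to pull the <think> and <tool_call> blocks out of the raw completion itself. A minimal parser sketch, assuming well-formed JSON inside the tags:

```python
import json
import re

# Minimal parser for an assistant turn in the Hermes format shown above.
# A sketch only: production code should handle malformed JSON defensively.
TOOL_CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def parse_assistant_turn(text):
    """Return (list of reasoning strings, list of decoded tool-call dicts)."""
    thoughts = THINK_RE.findall(text)
    calls = [json.loads(m) for m in TOOL_CALL_RE.findall(text)]
    return thoughts, calls

turn = ("<think>The user wants weather info. I'll call get_weather.</think>\n"
        '<tool_call>{"name": "get_weather", "arguments": {"city": "Tokyo"}}</tool_call>')
thoughts, calls = parse_assistant_turn(turn)
```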

Quantization Details

| Field | Value |
|-------|-------|
| Source | GestaltLabs/Ornstein-hermes-3.6-27b (bf16) |
| F16 GGUF size | 53.8 GB (851 tensors) |
| Tool | llama.cpp (latest) |
| imatrix corpus | 800 conversations from DJLougen/Acta-Synthetic, passes_thresholds=True, rendered with the Qwen3.6 chat template (~385K tokens, 1.74 MB) |
| imatrix params | --n-gpu-layers 999 -c 4096 -b 4096 --chunks 200 |
| Hardware | 1× NVIDIA RTX PRO 6000 Blackwell |

License

Apache 2.0 — inherited from Qwen 3.6 base.

Citation

If you use this model, please consider citing the dataset:

@dataset{lougen_acta_2026,
  author = {DJLougen},
  title = {Acta: A Premium Curated Sample of High-Quality Agentic Tool-Use Conversations},
  year = {2026},
  url = {https://huggingface.co/datasets/DJLougen/Acta}
}
