
GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit


Ornstein-Hermes-3.6-27B SABER MLX 8-bit

This is an MLX 8-bit quantized conversion of GestaltLabs/Ornstein-Hermes-3.6-27b-SABER.

The source model is a SABER-edited release candidate of GestaltLabs/Ornstein-Hermes-3.6-27b, selected for its observed refusal/KLD tradeoff on the source evaluation set. See the source model card for the full methodology, evaluation details, attribution, and limitations.

Quantization

Field                      Value
Format                     MLX safetensors
Quantization               affine
Bits                       8
Group size                 64
Reported bits per weight   8.501
Uploaded storage           ~28.6 GB
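As a back-of-envelope check on the reported bits-per-weight figure, 8-bit affine quantization stores quantization parameters per group of weights. Assuming one fp16 scale and one fp16 bias per group of 64 (an assumption about the storage layout; MLX's exact format may differ slightly, which would explain the small gap from 8.501):

```python
# Rough bits-per-weight estimate for 8-bit affine quantization
# with group size 64, assuming one fp16 scale and one fp16 bias
# stored per group of weights.
bits = 8
group_size = 64
scale_bits = 16   # fp16 scale per group (assumed)
bias_bits = 16    # fp16 bias per group (assumed)

bpw = bits + (scale_bits + bias_bits) / group_size
print(bpw)  # 8.5, close to the reported 8.501

# Sanity-check the uploaded size for a ~27B-parameter model:
approx_gb = 27e9 * bpw / 8 / 1e9
print(round(approx_gb, 1))  # ~28.7 GB, in line with ~28.6 GB uploaded
```

The small difference from the reported 8.501 is consistent with a few tensors (e.g. norms or embeddings) being stored at higher precision.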

Use With MLX

Install mlx-lm, then load this repo directly:

pip install -U mlx-lm
mlx_lm.generate \
  --model GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit \
  --prompt "Explain quantum computing in simple terms."

Python:

from mlx_lm import load, generate

model, tokenizer = load("GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit")
response = generate(
    model,
    tokenizer,
    prompt="Explain quantum computing in simple terms.",
    max_tokens=256,
)
print(response)

For chat-style usage, apply the tokenizer chat template used by the source model before generation.
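A minimal sketch of chat-style usage, assuming the tokenizer shipped with the source model includes a chat template (the message content below is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit")

messages = [
    {"role": "user", "content": "Explain quantum computing in simple terms."},
]

# Render the conversation with the tokenizer's chat template
# so the model sees the same formatting it was trained on.
prompt = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=False,
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(response)
```

If the template is missing, generation will still run on the raw prompt, but outputs may degrade because the model expects its chat formatting.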

Notes

  • This repo contains only the MLX-converted weights and tokenizer assets.
  • Behavioral claims, evaluation numbers, and limitations are inherited from the source model.
  • The source model is a model-editing research artifact with dual-use implications.
  • The model inherits licensing considerations from the source and base model.

