# GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit

Ornstein-Hermes-3.6-27B SABER MLX 8-bit
This is an MLX 8-bit quantized conversion of GestaltLabs/Ornstein-Hermes-3.6-27b-SABER.
The source model is a SABER-edited release candidate of GestaltLabs/Ornstein-Hermes-3.6-27b, selected for its observed refusal/KLD tradeoff on the source evaluation set. See the source model card for the full methodology, evaluation details, attribution, and limitations.
## Quantization
| field | value |
|---|---|
| Format | MLX safetensors |
| Quantization | affine |
| Bits | 8 |
| Group size | 64 |
| Reported bits per weight | 8.501 |
| Uploaded storage | ~28.6 GB |
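The reported bits per weight can be sanity-checked from the table: group-wise affine quantization stores, per group, one scale and one bias alongside the quantized weights. A minimal sketch, assuming fp16 (16-bit) group scales and biases, which is the MLX default:

```python
def affine_bpw(bits=8, group_size=64, scale_bits=16, bias_bits=16):
    # Effective bits per weight for group-wise affine quantization:
    # `bits` for each weight, plus one scale and one bias shared by
    # each group of `group_size` weights.
    return bits + (scale_bits + bias_bits) / group_size

print(affine_bpw())  # 8.5
```

This gives 8.5 bits per weight, in line with the reported 8.501 (the small difference comes from layers that are not quantized group-wise, e.g. embeddings or norms).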
## Use With MLX
Install mlx-lm, then load this repo directly:
```bash
pip install -U mlx-lm

mlx_lm.generate \
  --model GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit \
  --prompt "Explain quantum computing in simple terms."
```
Python:

```python
from mlx_lm import load, generate

model, tokenizer = load("GestaltLabs/Ornstein-Hermes-3.6-27b-SABER-MLX-8bit")
response = generate(
    model,
    tokenizer,
    prompt="Explain quantum computing in simple terms.",
    max_tokens=256,
)
print(response)
```
For chat-style usage, apply the tokenizer's chat template (inherited from the source model) to your messages before generating.
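In practice, prefer `tokenizer.apply_chat_template(...)` so the template always matches the shipped tokenizer. For illustration only, here is a minimal hand-rolled sketch of a ChatML-style prompt, *assuming* this model follows the ChatML convention common to Hermes-family models (verify against the source model's template before relying on it):

```python
def chatml_prompt(messages):
    # Wrap each turn in ChatML markers (<|im_start|> ... <|im_end|>),
    # then open an assistant turn for the model to complete.
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain quantum computing in simple terms."},
])
```

The resulting `prompt` string can be passed to `generate(...)` in place of the plain prompt above.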
## Notes
- This repo contains only the MLX-converted weights and tokenizer assets.
- Behavioral claims, evaluation numbers, and limitations are inherited from the source model.
- The source model is a model-editing research artifact with dual-use implications.
- The model inherits licensing considerations from the source and base model.
## Source

- Source model: GestaltLabs/Ornstein-Hermes-3.6-27b-SABER
- Base model lineage: GestaltLabs/Ornstein-Hermes-3.6-27b