
CompactAI-O/TMLM-Haiku-2.3


Note: To run this model properly, you must use the custom Python script. You can download it from the Downloads option (scroll down to find it).

TMLM-Haiku-2.3

It speaks. It actually speaks. Mostly.

We have come so far: from the dark ages of couldcouldoldbloodblood to actual, coherent, structured sentences. This is TMLM-Haiku-2.3. It has 1 million parameters. It is small. It is trying its best. And unlike its ancestors, it usually succeeds.

Quick Stats

  • Parameters: 1,000,000 (Yes, really. 1M.)
  • Training Tokens: 10 Billion
  • Context Window: 2048 tokens
  • Vibe: Chaotic good, but mostly good.

What Is This?

Haiku-2.3 is the latest evolution of the TMLM-Haiku series. It builds on Haiku-2 by adding SPIN (Self-Play Fine-Tuning) to the training loop. This model represents a 3x improvement in combined performance score over the original Haiku. Coherence has jumped from 1.99 to 6.03. Relevance is no longer zero. It is a miracle.
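The core idea behind SPIN can be sketched as a DPO-style logistic loss: the current model is trained to prefer the human-written continuation over its own previous-iteration generation, measured relative to a frozen reference copy. This is a minimal illustration of that objective in plain Python, not the actual training code used for Haiku-2.3; the log-probability values below are made up for demonstration.

```python
import math

def spin_loss(logp_real, ref_logp_real, logp_gen, ref_logp_gen, beta=1.0):
    """SPIN-style logistic loss (sketch).

    logp_* are sequence log-probs under the current model; ref_logp_*
    under the frozen reference model. The loss is small when the model
    prefers real data over its own past generations, relative to the
    reference.
    """
    margin = beta * ((logp_real - ref_logp_real) - (logp_gen - ref_logp_gen))
    # Negative log-sigmoid of the margin.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Illustrative values only: a model that favors the real continuation
# gets a much smaller loss than one that favors its own generation.
loss_good = spin_loss(logp_real=-10.0, ref_logp_real=-12.0,
                      logp_gen=-15.0, ref_logp_gen=-12.0)
loss_bad = spin_loss(logp_real=-15.0, ref_logp_real=-12.0,
                     logp_gen=-10.0, ref_logp_gen=-12.0)
print(loss_good < loss_bad)  # → True
```

Each SPIN round regenerates the "rejected" samples from the latest checkpoint, so the model keeps competing against its own past self until its outputs are hard to tell apart from the training data.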

The Journey

| Model | Era | Typical Output | Combined Score |
| --- | --- | --- | --- |
| Haiku-1 | The Dark Ages | couldcouldoldbloodbloodbodybody | 1.62 |
| Haiku-1.3 | The Pipe Character Incident | \|fdish\|\|\|\|\|!@\| | 1.21 |
| Haiku-2 | The Awakening | It is about **competent development**... | 3.87 |
| Haiku-2.3 (SPIN) | Current Era | The artificial intelligence is a problem... | 4.84 ★ |

Expected Output:

"The simple terms arrived in simulant explorers and honey are specific or forecasters. They allow the structure of their similar..."

Disclaimer

This is a 1 million parameter model.

  • It is not GPT-5.
  • It is not GPT-2.
  • It is a tiny neural network running on a prayer and a GPU.
  • It might still output chuamliamce occasionally. If it does, just try again. It is shy.
  • For best results, use temperature around 0.7. If you crank it to 2.0, you are on your own.
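The temperature advice above comes down to how the logits are rescaled before sampling: dividing by 0.7 sharpens the distribution toward the top token, while dividing by 2.0 flattens it and invites chaos. A minimal sketch in plain Python (not the model's actual sampling code; the logits are made up):

```python
import math
import random

def temperature_probs(logits, temperature):
    """Convert raw logits to sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=0.7, rng=random):
    """Draw one token index from the temperature-scaled distribution."""
    r = rng.random()
    cum = 0.0
    probs = temperature_probs(logits, temperature)
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# Lower temperature concentrates mass on the top logit.
logits = [2.0, 1.0, 0.1]
print(temperature_probs(logits, 0.7)[0] > temperature_probs(logits, 2.0)[0])  # → True
```

With only 1M parameters, the tail of the distribution is mostly noise, which is why a sub-1.0 temperature keeps Haiku-2.3 on its best behavior.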

Benchmarks

We benchmarked Haiku-2.3 against all previous versions using a standard 7-question suite.

| Metric | Haiku-1 | Haiku-1.3 | Haiku-2 | Haiku-2.3 (SPIN) |
| --- | --- | --- | --- | --- |
| Fluency | 0.50 | 1.69 | 8.35 | 8.78 |
| Coherence | 1.99 | 1.56 | 5.72 | 6.03 |
| Relevance | 1.22 | 0.00 | 0.00 | 2.25 |
| Format | 3.29 | 3.29 | 3.29 | 3.29 |
| Combined | 1.62 | 1.21 | 3.87 | 4.84 |

Related Models

Check out the rest of the family:

Acknowledgments

Built with curiosity over compute. Trained on FineWeb-Edu. SPIN optimized. And a lot of hope.


Built by CompactAI. If you like tiny models that try their best, give us a follow.

