
PleIAs/CommonLingua


CommonLingua

CommonLingua is a 2.35-million-parameter language identification model trained by PleIAs on 2,482,568 paragraphs from Structured Wikipedia and Common Corpus, in partnership with the GSMA's "AI Language Models in Africa, by Africa, for Africa" initiative. As of 2026, CommonLingua is the best-performing model on the CommonLID benchmark, with significant gains over the previous baseline.

CommonLingua is based on a byte-level hybrid architecture combining three Conv1D layers with an attention layer. It was originally designed for large-scale classification of pretraining data and was intentionally trained on diverse data sources, especially realistic documents with OCR errors, with a particular focus on the long tail: 61 African languages are supported, including languages with almost no prior coverage.

Since CommonLingua is trained exclusively on open data under free licenses, we release the full original dataset with detailed licensing information.

Architecture

CommonLingua uses an original architecture, optimized for task accuracy at an extremely small model size.

Main features:

  • No tokenizer. The model operates directly on raw UTF-8 bytes, padded to 512. This makes it inherently script-agnostic — Latin, Arabic, Ethiopic, N'Ko, Tifinagh, Devanagari, CJK, all handled by the same byte stream.
  • Trigram hash embedding: a polynomial rolling hash of byte 3-grams indexes a 4096-bucket embedding table. Hash collisions act as regularisation. Our ablations showed the added trigram signal improved macro F1 by +1.2 points over a baseline without it.
  • Causal Conv1D × 3 captures local byte patterns (script ranges, common digraphs, morpheme boundaries).
  • Bidirectional attention × 1 with RoPE captures global structure across the paragraph.
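To make the trigram path concrete, here is a minimal sketch of a polynomial byte 3-gram hash. Only the 4096-bucket count comes from the description above; the base value and the exact polynomial are illustrative assumptions, not the model's actual hash.

```python
def trigram_hash_ids(text: str, num_buckets: int = 4096, base: int = 257) -> list[int]:
    """Map every byte 3-gram of the UTF-8 input to a bucket index.

    A polynomial hash over the three byte values, reduced modulo the
    bucket count. Base 257 and the polynomial form are illustrative
    choices; collisions are expected and act as regularisation.
    """
    data = text.encode("utf-8")
    return [
        ((data[i] * base + data[i + 1]) * base + data[i + 2]) % num_buckets
        for i in range(len(data) - 2)
    ]
```

Because the hash operates on bytes rather than characters, it is script-agnostic in the same way as the rest of the model.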

Evaluation

We evaluated CommonLingua on CommonLID (Ortiz Suárez et al. 2026): 376,000 held-out paragraphs covering 200+ languages. All baselines are re-evaluated through the same pipeline (iso639-lang normalisation and equivalence-class collapsing applied identically) for an apples-to-apples comparison.
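To make the two accuracy columns below concrete: strict accuracy requires an exact label match, while equivalence accuracy first collapses near-duplicate codes into one class. A minimal sketch, with a hypothetical equivalence map; the real class definitions live in the CommonLID pipeline and are not reproduced here.

```python
# Hypothetical equivalence classes, for illustration only: the actual
# CommonLID mapping is defined in the benchmark's evaluation pipeline.
EQUIV = {"nob": "nor", "nno": "nor"}

def strict_acc(preds: list[str], golds: list[str]) -> float:
    """Fraction of predictions that match the gold label exactly."""
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)

def equiv_acc(preds: list[str], golds: list[str]) -> float:
    """Fraction matching after collapsing codes to a class representative."""
    collapse = lambda code: EQUIV.get(code, code)
    return sum(collapse(p) == collapse(g) for p, g in zip(preds, golds)) / len(golds)
```

Equivalence accuracy is always at least as high as strict accuracy, which matches the gap visible in the table.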

| Model | Params | Labels | Strict acc | Equiv acc | Macro F1 |
|---|---|---|---|---|---|
| OpenLID v2 | ~600 M | 200 | 55.77 % | 70.19 % | 0.6390 |
| fastText-218 (NLLB) | ~600 M | 218 | 59.53 % | 71.64 % | 0.6590 |
| GlotLID v3 | ~600 M | 2 102 | 57.69 % | 71.26 % | 0.6729 |
| CommonLingua | 2.35 M | 334 | 77.63 % | 82.92 % | 0.7879 |

CommonLingua reaches +11.5 macro F1 points over the next-best baseline. We discarded Lingala from our evaluation, since most Lingala samples in CommonLID turned out to belong to other closely related languages.

Throughput

We measured CommonLingua throughput in texts/sec (one paragraph = one text; inputs ≤ 512 bytes, padded).

| Device | Setting | fp32 | bf16 | bf16 vs fp32 |
|---|---|---|---|---|
| H100 80GB (bs=4096) | best | 10,962 | 26,236 | 2.4× |
| H100 80GB (bs=1024) | | 10,892 | 26,130 | 2.4× |
| H100 80GB (bs=256) | | 10,677 | 25,241 | 2.4× |
| H100 80GB (bs=64) | low-latency | 10,025 | 22,625 | 2.3× |
| Sapphire Rapids CPU (8 threads) | bs=32 | 183 | 553 | 3.0× |
| Sapphire Rapids CPU (1 thread) | bs=32 | 44 | 114 | 2.6× |

Inference

The easiest way to try the model is to run the provided predict.py script:

python predict.py "Wikipédia est une encyclopédie universelle, multilingue."  # fra 0.99

The intended workload is paragraph-level corpus curation. CommonLingua was not assessed on very short text segments and will likely perform worse than alternatives on them.
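Since the model is tokenizer-free, the only input preparation is byte-level. A minimal sketch of that preprocessing, assuming zero-padding; the actual pad value used inside predict.py may differ.

```python
def to_model_bytes(text: str, max_len: int = 512, pad_byte: int = 0) -> list[int]:
    """Encode text to raw UTF-8 bytes, truncate, and right-pad to a fixed length.

    The 512-byte length matches the model card; the pad value of 0 is
    an assumption made for this sketch.
    """
    data = list(text.encode("utf-8"))[:max_len]
    return data + [pad_byte] * (max_len - len(data))
```

Note that truncation happens at the byte level, so paragraphs longer than 512 bytes are cut mid-text; multi-byte UTF-8 characters at the boundary may be split, which a byte-level model tolerates by design.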

Citation

@misc{commonlingua,
  author = {{PleIAs}},
  title  = {CommonLingua: Byte-level Language Identification for 334 Languages},
  year   = {2026},
  url    = {https://huggingface.co/PleIAs/CommonLingua}
}