EverMind-AI/MSA-4B

Highlights

Long-term memory is essential for general intelligence, yet the full-attention bottleneck constrains most LLMs' effective context length to 128K–1M tokens. Existing attempts (hybrid linear attention, fixed-size state memory such as RNNs, and external storage like RAG/agents) either suffer rapid precision decay and latency growth at extreme scales, lack end-to-end differentiability or dynamic memory maintenance, or require complex pipelines. We present Memory Sparse Attention (MSA): an end-to-end trainable, scalable sparse latent-state memory framework. Core ideas include:

  • Scalable sparse attention + document-wise RoPE (parallel/global) achieving near-linear complexity in both training and inference;
  • KV cache compression with a Memory Parallel inference engine to deliver 100M token throughput on 2×A800 GPUs;
  • Memory Interleave for multi-round, multi-hop reasoning across scattered memory segments.
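The selection mechanism behind the first bullet can be illustrated with a generic top-k sparse attention sketch: each query scores all keys but attends only to its top-k matches, so the softmax (and its gradient) flows through a sparse subset of the sequence. This is a minimal numpy illustration of the general idea, not the actual MSA kernel; shapes and `top_k` are arbitrary assumptions.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k):
    """Per-query top-k sparse attention: each query attends only to its
    top_k highest-scoring keys (ties may admit a few extra keys).
    Illustrative sketch of selection-based sparse attention, not MSA itself."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                # (n_q, n_k) scaled similarities
    # Per-query threshold: the top_k-th largest score on each row.
    thresh = np.partition(scores, -top_k, axis=-1)[:, -top_k:].min(axis=-1, keepdims=True)
    # Mask everything below the threshold before the softmax.
    masked = np.where(scores >= thresh, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (n_q, d_v)

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 16))
k = rng.normal(size=(128, 16))
v = rng.normal(size=(128, 16))
out = topk_sparse_attention(q, k, v, top_k=8)    # each query reads ~8 of 128 keys
```

With a fixed `top_k`, per-query cost stays constant as the key set grows, which is the property that lets this style of attention approach linear complexity in sequence length.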

On long-context QA and NIAH (Needle-in-a-Haystack) benchmarks, MSA surpasses same-backbone RAG, best-of-breed RAG stacks, and leading long-context models. Across an unprecedented 16K→100M token range, MSA shows < 9% degradation, suggesting a practical path to decouple memory capacity from reasoning.

Scaling from 16K→100M tokens: MSA fuses top-k selection with sparse attention to remain end-to-end differentiable while allowing document decoupling at inference. On MS MARCO, MSA sustains <9% degradation and exhibits strong extrapolation. Some baseline curves end early due to their context limits.

Figure 1: MSA scalability under extreme-long contexts (scaling curve, 16K→100M tokens)

Model Overview

This model is based on Qwen3-4B-Instruct-2507 with Memory Sparse Attention (MSA).

  • Number of Parameters: 4.0B
  • Number of Layers: 36
  • Number of MSA Layers: 18
  • Number of Attention Heads (GQA): 32 for Q and 8 for KV
  • Based on Qwen/Qwen3-4B-Instruct-2507
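The GQA layout above means the 32 query heads share 8 KV heads, so each KV head serves 32 // 8 = 4 query heads and the KV cache is a quarter of the multi-head-attention size. A minimal sketch of the head grouping (the head dimension and sequence length here are illustrative assumptions, not from the model card):

```python
import numpy as np

# GQA head layout from the model overview: 32 Q heads, 8 KV heads.
n_q_heads, n_kv_heads = 32, 8
head_dim, seq_len = 128, 10            # assumed values for illustration only
group_size = n_q_heads // n_kv_heads   # 4 query heads per KV head

# Cached keys are stored once per KV head...
k_cache = np.zeros((n_kv_heads, seq_len, head_dim))
# ...and broadcast to each KV head's group of query heads before attention.
k_expanded = np.repeat(k_cache, group_size, axis=0)
assert k_expanded.shape == (n_q_heads, seq_len, head_dim)
```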

For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our paper and GitHub.

Performance

Setup:

  • QA: 9 datasets (MS MARCO v1, NQ, DuReader, TriviaQA (10M), NarrativeQA, PopQA, 2WikiMultiHopQA, HotpotQA, MuSiQue); memory banks of 277K→10M tokens; metric: LLM judge (0–5).
  • NIAH (RULER): 8 subtasks, 32K→1M tokens; we report average accuracy.
  • Backbone: Qwen3‑4B‑Instruct‑2507, compared against same-backbone RAG and best-of-breed RAG stacks (KaLMv2 + large generators, optional reranker).

Table 1: MSA vs same-backbone RAG (Qwen3‑4B)

Summary: Average 3.760, improving over standard RAG (+16.0%), RAG+rerank (+11.5%), and HippoRAG2 (+14.8%) using their best@k; MSA leads on all but NarrativeQA within the same-backbone group.

| Dataset | Tokens | Qwen3-4B R@1 | R@5 | R@10 | Qwen3-4B (RR) R@1 | R@5 | R@10 | HippoRAG2 R@1 | R@5 | R@10 | MSA (adaptive) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| MS MARCO v1 | 7.34M | 2.893 | 3.011 | 3.005 | 2.934 | 3.032 | 3.017 | 2.676 | 3.005 | 3.019 | 4.141 |
| Natural Questions | 1.47M | 3.452 | 3.374 | 3.297 | 3.494 | 3.408 | 3.385 | 3.338 | 3.389 | 3.374 | 3.545 |
| DuReader | 277K | 3.726 | 3.579 | 3.594 | 3.848 | 3.618 | 3.607 | 2.941 | 3.485 | 3.415 | 4.155 |
| TriviaQA (10M) | 10M | 4.133 | 4.414 | 4.273 | 4.313 | 4.375 | 4.391 | 4.188 | 4.430 | 4.367 | 4.621 |
| NarrativeQA | 538K | 1.611 | 2.567 | 2.860 | 3.638 | 3.492 | 3.536 | 1.959 | 2.628 | 2.655 | 3.395 |
| PopQA | 1.18M | 2.959 | 3.273 | 3.299 | 3.315 | 3.264 | 3.266 | 3.111 | 3.249 | 3.249 | 3.433 |
| 2WikiMultiHopQA | 722K | 1.065 | 3.055 | 3.136 | 1.187 | 3.057 | 3.159 | 1.045 | 3.180 | 3.330 | 4.280 |
| HotpotQA | 1.35M | 2.252 | 3.582 | 3.787 | 2.642 | 3.990 | 4.022 | 3.230 | 3.770 | 3.970 | 4.061 |
| MuSiQue | 1.41M | 0.936 | 1.752 | 1.928 | 1.144 | 1.960 | 1.965 | 1.020 | 1.907 | 2.095 | 2.211 |
| Average | — | 2.559 | 3.179 | 3.242 | 2.946 | 3.355 | 3.372 | 2.612 | 3.227 | 3.275 | 3.760 |

Table 1: Same-backbone RAG vs MSA (@1/@5/@10 vs MSA @adaptive)
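The relative gains quoted in the Table 1 summary can be reproduced from the per-system averages in the table's last row, taking each baseline's best@k as the best of its R@1/R@5/R@10 averages:

```python
# Reproduce the Table 1 summary gains from the "Average" row above.
msa = 3.760
best_at_k = {
    "standard RAG": max(2.559, 3.179, 3.242),   # best@k = 3.242
    "RAG + rerank": max(2.946, 3.355, 3.372),   # best@k = 3.372
    "HippoRAG2":    max(2.612, 3.227, 3.275),   # best@k = 3.275
}
gains = {name: 100 * (msa / s - 1) for name, s in best_at_k.items()}
# → roughly +16.0%, +11.5%, +14.8%, matching the summary
```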

Table 2: MSA vs best-of-breed RAG (large backbones)

Summary: Against KaLMv2+Qwen3‑235B and KaLMv2+Llama‑3.3‑70B (w/ and w/o reranking), MSA achieves the best score on 4/9 datasets and an average 3.760, with relative gains of +7.2%, +5.0%, +10.7%, and +5.4% over the strongest configurations respectively. Gaps on a few datasets (e.g., MuSiQue) are largely attributable to parameter-count and intrinsic reasoning capacity.

| Dataset | KaLMv2 + Qwen3-235B R@1 | R@5 | R@10 | Qwen3-235B (RR) R@1 | R@5 | R@10 | KaLMv2 + Llama-3.3 R@1 | R@5 | R@10 | Llama-3.3 (RR) R@1 | R@5 | R@10 | MSA (adaptive) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| MS MARCO v1 | 2.846 | 3.028 | 3.027 | 2.886 | 3.020 | 2.995 | 2.649 | 2.904 | 2.919 | 2.881 | 2.955 | 2.952 | 4.141 |
| Natural Questions | 3.711 | 3.670 | 3.694 | 3.621 | 3.610 | 3.645 | 3.675 | 3.674 | 3.662 | 3.756 | 3.665 | 3.647 | 3.545 |
| DuReader | 4.044 | 3.991 | 3.978 | 3.973 | 3.932 | 3.891 | 4.051 | 3.846 | 3.742 | 3.967 | 3.776 | 3.780 | 4.155 |
| TriviaQA (10M) | 4.367 | 4.656 | 4.578 | 4.492 | 4.320 | 4.555 | 4.273 | 4.740 | 4.719 | 4.547 | 4.703 | 4.695 | 4.621 |
| NarrativeQA | 1.413 | 2.130 | 2.427 | 3.212 | 3.427 | 3.375 | 1.290 | 2.123 | 2.382 | 3.150 | 3.263 | 3.317 | 3.395 |
| PopQA | 2.810 | 3.347 | 3.396 | 3.268 | 3.380 | 3.376 | 2.787 | 3.298 | 3.305 | 3.337 | 3.384 | 3.362 | 3.433 |
| 2WikiMultiHopQA | 2.646 | 3.579 | 3.582 | 1.855 | 3.381 | 3.583 | 1.339 | 3.263 | 3.445 | 1.651 | 3.332 | 3.541 | 4.280 |
| HotpotQA | 3.497 | 4.090 | 4.225 | 3.341 | 4.141 | 4.194 | 3.070 | 3.896 | 4.127 | 3.428 | 4.145 | 4.203 | 4.061 |
| MuSiQue | 1.988 | 2.462 | 2.647 | 1.801 | 2.522 | 2.605 | 1.704 | 2.317 | 2.258 | 1.895 | 2.462 | 2.614 | 2.211 |
| Average | 3.036 | 3.439 | 3.506 | 3.161 | 3.526 | 3.580 | 2.760 | 3.340 | 3.396 | 3.179 | 3.521 | 3.568 | 3.760 |

Table 2: SOTA RAG stacks (strong retriever + large generator + optional reranker) vs MSA
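The same check works for the four gains quoted in the Table 2 summary, using each configuration's best@k average from the table's last row:

```python
# Reproduce the Table 2 summary gains from the "Average" row above.
msa = 3.760
best_at_k = {
    "KaLMv2 + Qwen3-235B":      3.506,
    "KaLMv2 + Qwen3-235B (RR)": 3.580,
    "KaLMv2 + Llama-3.3":       3.396,
    "KaLMv2 + Llama-3.3 (RR)":  3.568,
}
gains = [round(100 * (msa / s - 1), 1) for s in best_at_k.values()]
# → [7.2, 5.0, 10.7, 5.4], matching the summary
```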

Quick Start

1. Download the code from our GitHub and install dependencies

git clone https://github.com/EverMind-AI/MSA
cd MSA

conda create -n msa python=3.12 -y
conda activate msa

pip install -r requirements.txt
pip install flash-attn==2.7.4.post1 --no-build-isolation

2. Download model

mkdir ckpt
pip install -U huggingface_hub
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download EverMind-AI/MSA-4B --local-dir ckpt/MSA-4B

3. Download benchmarks

Benchmark data is hosted on EverMind-AI/MSA-RAG-BENCHMARKS and will be automatically downloaded to data/ on first run, based on the benchmarks specified in scripts/run_benchmarks.sh. No manual download is needed.

4. Run inference on benchmarks

bash scripts/run_benchmarks.sh eval_benchmark

5. Compute LLM-based scores

bash scripts/calculate_llm_score.sh eval_benchmark

Supported Benchmarks

| Category | Benchmark |
|---|---|
| Multi-hop QA | 2wikimultihopqa, hotpotqa, musique |
| Single-hop QA | nature_questions, triviaqa_06M, triviaqa_10M, msmarco_v1, dureader, ms_100M, hipporag_narrative, hipporag_popqa |

Citation

If you find our work helpful, please consider citing it:

@misc{chen2026msamemorysparseattention,
      title={MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens},
      author={Yu Chen and Runkai Chen and Sheng Yi and Xinda Zhao and Xiaohong Li and Jianjin Zhang and Jun Sun and Chuanrui Hu and Yunyun Han and Lidong Bing and Yafeng Deng and Tianqiao Chen},
      year={2026},
      eprint={2603.23516},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2603.23516},
}

Acknowledgments

This model is maintained by the MSA authors. For project updates, please visit the homepage: https://evermind.ai/
