llm-jp/llm-jp-4-32b-a3b-thinking

LLM-jp-4 is a series of large language models developed by the Research and Development Center for Large Language Models at the National Institute of Informatics.

This repository provides the llm-jp-4-32b-a3b-thinking model. For an overview of the LLM-jp-4 models across different parameter sizes, please refer to:

Base models are trained with pre-training and mid-training only. Post-trained models are aligned using supervised fine-tuning (SFT) and direct preference optimization (DPO), without reinforcement learning.

To support the continued development of LLM-jp, we would greatly appreciate it if you could share how you utilize LLM-jp outcomes via the survey form.

Usage

Please refer to our cookbook for practical usage examples and detailed instructions on how to use the models.
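As a minimal, hypothetical sketch (not taken from the cookbook): assuming the standard Hugging Face `transformers` API and the model id `llm-jp/llm-jp-4-32b-a3b-thinking`, loading and prompting the model might look like the following. The cookbook remains the officially supported path.

```python
# Hypothetical usage sketch -- NOT taken from the cookbook. It assumes the
# standard Hugging Face transformers API and the model id
# "llm-jp/llm-jp-4-32b-a3b-thinking".

MODEL_ID = "llm-jp/llm-jp-4-32b-a3b-thinking"

# A single-turn chat in the usual role/content message format.
messages = [
    {"role": "user", "content": "Explain briefly what natural language processing is."},
]

def generate(max_new_tokens: int = 512) -> str:
    """Download the checkpoint and run one generation.

    Deliberately not called at import time: the 32B-A3B checkpoint needs
    tens of GB of memory and a network connection to download.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # The model's own chat template handles the thinking-style formatting.
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```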

Model Details

  • Model type: Transformer-based Language Model
  • Architectures:

Dense model:

| Params | Layers | Hidden size | Heads | Context length | Embedding parameters | Non-embedding parameters | Total parameters |
|---|---|---|---|---|---|---|---|
| 8B | 32 | 4,096 | 32 | 65,536 | 805,306,368 | 7,784,894,464 | 8,590,200,832 |

MoE model:

| Params | Layers | Hidden size | Heads | Routed Experts | Activated Experts | Context length | Embedding parameters | Non-embedding parameters | Activated parameters | Total parameters |
|---|---|---|---|---|---|---|---|---|---|---|
| 32B-A3B | 32 | 2,560 | 40 | 128 | 8 | 65,536 | 503,316,480 | 31,635,712,512 | 3,827,476,992 | 32,139,028,992 |
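The reported counts are internally consistent; a quick arithmetic check (all numbers copied from the tables above):

```python
# Cross-check the parameter counts reported in the two architecture tables.

# Dense 8B model: embedding + non-embedding = total.
dense_embedding = 805_306_368
dense_non_embedding = 7_784_894_464
assert dense_embedding + dense_non_embedding == 8_590_200_832

# MoE 32B-A3B model: the same identity holds for the full parameter set.
moe_embedding = 503_316_480
moe_non_embedding = 31_635_712_512
moe_total = 32_139_028_992
moe_activated = 3_827_476_992
assert moe_embedding + moe_non_embedding == moe_total

# "A3B" = roughly 3B activated parameters: only 8 of the 128 routed experts
# fire per token, so each forward pass touches about 3.8B of the 32.1B total.
print(f"activated fraction: {moe_activated / moe_total:.1%}")  # → 11.9%
```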

Tokenizer

The tokenizer of this model is based on huggingface/tokenizers Unigram byte-fallback model. The vocabulary entries were converted from llm-jp-tokenizer v4.0. Please refer to README.md of llm-jp-tokenizer for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).

[!NOTE] The chat template of this model is designed to be compatible with the OpenAI Harmony response format. However, the tokenizer differs from the one assumed by the openai-harmony library, and therefore direct tokenization with openai-harmony is not supported. For correct behavior, please use the tokenizer provided with this model. For detailed usage, please refer to our cookbook.
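To make the note concrete, here is a minimal sketch (the model id is assumed; `apply_chat_template` is the standard `transformers` interface) of rendering a conversation with the bundled tokenizer rather than with openai-harmony:

```python
# Hypothetical sketch: build the prompt with the tokenizer shipped with this
# model, not with the openai-harmony library (direct openai-harmony
# tokenization is unsupported for this model).

MODEL_ID = "llm-jp/llm-jp-4-32b-a3b-thinking"  # assumed model id

def render_prompt(messages: list) -> str:
    """Return the Harmony-compatible prompt text from the model's template.

    Downloads the tokenizer, so it is not executed at import time.
    """
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # tokenize=False returns the templated string for inspection; omit it
    # (or pass tokenize=True) to get token ids for generation.
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```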

Training

Pre-training

This model is trained through a multi-stage pipeline consisting of pre-training and mid-training phases, using a total of 11.7T tokens.

(Figure: overview of the pre-training and mid-training pipeline)

The corpora used for pre-training and mid-training are publicly available at the following links:

[!NOTE] Although most of the corpora have been released, some portions are excluded from public release due to licensing constraints.

Post-training

We have fine-tuned the pre-trained checkpoint using SFT and further aligned it with DPO.

The datasets used for post-training are also publicly available at the following links:

Evaluation

llm-jp-judge

We evaluated the model on a variety of tasks using an LLM-as-a-Judge framework. The descriptions of each task are as follows.

  • MT-Bench (JA/EN): A benchmark for measuring multi-turn conversational task-solving ability.
  • AnswerCarefully: A benchmark for evaluating safety in Japanese. We used 336 questions from the v2.0 test set.
  • llm-jp-instructions: A set of human-created single-turn question–answer pairs. We used 400 questions from the test set.

We used gpt-5.4-2026-03-05 as the judge model.

[!NOTE] In earlier evaluations of the llm-jp-3 series, we used gpt-4o-2024-08-06 as the judge. The newer judge, gpt-5.4-2026-03-05, provides a stricter and more reliable assessment, which results in lower scores on benchmarks such as MT-Bench than those reported for the llm-jp-3 series.

The scores are averages over three rounds of inference and evaluation. For more details, please refer to the evaluation code.

| Model Name | MT-Bench (JA) | MT-Bench (EN) | AnswerCarefully | llm-jp-instructions |
|---|---|---|---|---|
| gpt-4o-2024-08-06 | 7.29 | 7.69 | 4.00 | 4.07 |
| gpt-5.4-2026-03-05 (reasoning_effort = low) | 8.87 | 8.76 | 4.38 | 4.79 |
| gpt-5.4-2026-03-05 (reasoning_effort = medium) | 8.87 | 8.89 | 4.43 | 4.82 |
| gpt-5.4-2026-03-05 (reasoning_effort = high) | 8.98 | 8.85 | 4.41 | 4.83 |
| gpt-oss-20b (reasoning_effort = low) | 7.21 | 7.95 | 3.39 | 3.08 |
| gpt-oss-20b (reasoning_effort = medium) | 7.33 | 7.85 | 3.55 | 3.16 |
| llm-jp-4-8b-thinking (reasoning_effort = low) | 7.23 | 7.54 | 3.58 | 3.50 |
| llm-jp-4-8b-thinking (reasoning_effort = medium) | 7.54 | 7.79 | 3.69 | 3.54 |
| llm-jp-4-32b-a3b-thinking (reasoning_effort = low) | 7.57 | 7.70 | 3.61 | 3.61 |
| llm-jp-4-32b-a3b-thinking (reasoning_effort = medium) | 7.82 | 7.86 | 3.70 | 3.61 |

Risks and Limitations

The models released here are in the early stages of our research and development and have not been tuned to ensure outputs align with human intent and safety considerations.

Send Questions to

llm-jp(at)nii.ac.jp

License

Apache License, Version 2.0

Acknowledgement

To develop this model, we used the NINJAL Web Japanese Corpus (whole-NWJC) from the National Institute for Japanese Language and Linguistics (NINJAL).

Model Card Authors

The names are listed in alphabetical order.

Hirokazu Kiyomaru and Takashi Kodama.
