
NorwAI/NorwAI-Magistral-24B-reasoning


Please note that access is limited to students, companies, and organizations from Nordic countries. To access the models, please provide your work or student email address. Thank you for your understanding.


Model Details

NorwAI-Magistral-24B-reasoning is an efficient reasoning language model, continually pretrained from Magistral by Mistral AI. It belongs to the NorwAI LLM family developed by the NorwAI research center at the Norwegian University of Science and Technology (NTNU) in collaboration with Schibsted, NRK, VG, and the National Library of Norway.

The model is designed to adapt its reasoning depth dynamically based on the type and complexity of the user’s question:

  • Completion mode for straightforward answers without reasoning
  • Short-thinking mode for moderately difficult questions requiring some reasoning
  • Long-thinking mode for more complex questions requiring deeper reasoning
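Because the model switches between these modes on its own, downstream code often needs to separate the reasoning trace from the final answer. The helper below is a minimal sketch, assuming the model wraps its chain of thought in `[THINK]` / `[/THINK]` markers in the Magistral style; the marker strings and the function name `split_reasoning` are illustrative assumptions, not part of the model's documented interface.

```python
# Hypothetical helper for separating a reasoning trace from the final answer.
# ASSUMPTION: the model emits its chain of thought between [THINK] and
# [/THINK] markers (Magistral-style); in completion mode no markers appear
# and the whole text is treated as the answer.
def split_reasoning(text, open_tag="[THINK]", close_tag="[/THINK]"):
    """Return (reasoning, answer); reasoning is None when no markers are found."""
    start = text.find(open_tag)
    end = text.find(close_tag)
    if start == -1 or end == -1 or end < start:
        # Completion mode: no reasoning block, the text is the answer.
        return None, text.strip()
    reasoning = text[start + len(open_tag):end].strip()
    answer = text[end + len(close_tag):].strip()
    return reasoning, answer

# Example on a mocked long-thinking generation:
r, a = split_reasoning("[THINK]DVT plus stroke suggests an embolism.[/THINK]Patent foramen ovale.")
# Example on a mocked completion-mode generation:
r2, a2 = split_reasoning("Patent foramen ovale.")
```

If the deployed model uses different delimiters, only the `open_tag` and `close_tag` defaults need to change.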

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer

model_and_tokenizer_path = "NorwAI/NorwAI-Magistral-24B-reasoning"
access_token = "<your access token>"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_and_tokenizer_path, token=access_token)
model = AutoModelForCausalLM.from_pretrained(model_and_tokenizer_path, token=access_token, device_map="balanced")

# define your own prompt
messages = [
    {"role": "user", "content": "Gitt symptomene på plutselig svakhet i venstre arm og ben, nylig langdistansereise og tilstedeværelsen av hovent og ømt høyre legg, hvilken spesifikk hjerteavvik er mest sannsynlig å finne ved videre evaluering som kan forklare disse funnene?"}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # must be added for generation
)

# generate a streamed response
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=2048,  # increase for longer outputs
    do_sample=True,       # required for temperature/top_p to take effect
    temperature=0.7,
    top_p=0.95,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)

Model Card Contact

Please contact the following people if you have any questions regarding the models:

Lemei Zhang, lemei.zhang@ntnu.no
Peng Liu, peng.liu@ntnu.no

Citation Information

If you find our work helpful, please cite our papers:

@article{gulla2026norwai,
  title={NorwAI's Large Language Models: Technical Report},
  author={Gulla, Jon Atle and Liu, Peng and Zhang, Lemei},
  journal={arXiv preprint arXiv:2601.03034},
  year={2026}
}

@inproceedings{liu2024nlebench+,
  title={NLEBench+NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian},
  author={Liu, Peng and Zhang, Lemei and Farup, Terje and Lauvrak, Even and Ingvaldsen, Jon and Eide, Simen and Gulla, Jon Atle and Yang, Zhirong},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
  pages={5543--5560},
  year={2024}
}