
unsloth/granite-4.1-30b-GGUF


> [!NOTE]
> Includes Unsloth chat template fixes!
> For llama.cpp, use --jinja.
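For example, a minimal llama.cpp invocation (the GGUF filename below is illustrative; substitute the quant file you actually downloaded):

./llama-cli -m granite-4.1-30b-Q4_K_M.gguf --jinja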

Unsloth Dynamic 2.0 achieves superior accuracy & outperforms other leading quants.


Granite-4.1-30B

Model Summary: Granite-4.1-30B is a 30B-parameter long-context instruct model finetuned from Granite-4.1-30B-Base using a combination of permissively licensed open-source instruction datasets and internally collected synthetic datasets. Granite 4.1 models have gone through an improved post-training pipeline, including supervised finetuning and reinforcement learning alignment, resulting in enhanced tool-calling, instruction-following, and chat capabilities.

Supported Languages: English, German, Spanish, French, Japanese, Portuguese, Arabic, Czech, Italian, Korean, Dutch, and Chinese. Users may finetune Granite 4.1 models for languages beyond these twelve.

Intended use: The model is designed to follow general instructions and can serve as the foundation for AI assistants across diverse domains, including business applications, as well as for LLM agents equipped with tool-use capabilities.

Capabilities

  • Summarization
  • Text classification
  • Text extraction
  • Question-answering
  • Retrieval Augmented Generation (RAG)
  • Code related tasks
  • Function-calling tasks
  • Multilingual dialog use cases
  • Fill-In-the-Middle (FIM) code completions

Generation: This is a simple example of how to use the Granite-4.1-30B model.

Install the following libraries:

pip install torch torchvision torchaudio
pip install accelerate
pip install transformers

Then, copy the snippet from the section that is relevant for your use case.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "ibm-granite/granite-4.1-30b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
    { "role": "user", "content": "Please list one IBM Research laboratory located in the United States. You should only output its name and location." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, 
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

<|start_of_role|>user<|end_of_role|>Please list one IBM Research laboratory located in the United States. You should only output its name and location.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>IBM Research - Almaden, San Jose, California<|end_of_text|>
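
If you want only the assistant's reply, without the echoed prompt or special tokens, a common variant of the decoding step (reusing the variable names from the snippet above) is:

generated = model.generate(**input_tokens, max_new_tokens=100)
# keep only the tokens produced after the prompt
reply_ids = generated[0, input_tokens["input_ids"].shape[1]:]
print(tokenizer.decode(reply_ids, skip_special_tokens=True))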

Tool-calling: Granite-4.1-30B comes with enhanced tool-calling capabilities, enabling seamless integration with external functions and APIs. To define a list of tools, please follow OpenAI's function definition schema.

This is an example of how to use the Granite-4.1-30B model's tool-calling ability:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"
model_path = "ibm-granite/granite-4.1-30b"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "Name of the city"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

# change input text as desired
chat = [
    { "role": "user", "content": "What's the weather like in Boston right now?" },
]
chat = tokenizer.apply_chat_template(chat,
                                     tokenize=False,
                                     tools=tools,
                                     add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt").to(device)
# generate output tokens
output = model.generate(**input_tokens, 
                        max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# print output
print(output[0])

Expected output:

<|start_of_role|>system<|end_of_role|>You are a helpful assistant with access to the following tools. You may call one or more tools to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather for a specified city.", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "Name of the city"}}, "required": ["city"]}}}
</tools>

For each tool call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request.<|end_of_text|>
<|start_of_role|>user<|end_of_role|>What's the weather like in Boston right now?<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|><tool_call>
{"name": "get_current_weather", "arguments": {"city": "Boston"}}
</tool_call><|end_of_text|>
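
After the model emits a tool call, the caller is responsible for executing it. Below is a minimal sketch of that round trip; the regex, the dispatch table, and the get_current_weather stub are illustrative assumptions, not part of the model or the transformers API. It parses the JSON between <tool_call> tags and runs the matching function; the result can then be appended to the chat as a {"role": "tool", ...} message and passed through apply_chat_template again.

import json
import re

# Hypothetical stub standing in for a real weather lookup.
def get_current_weather(city: str) -> dict:
    return {"city": city, "temperature_c": 12, "conditions": "cloudy"}

# Illustrative dispatch table mapping tool names to implementations.
TOOL_IMPLS = {"get_current_weather": get_current_weather}

def run_tool_calls(generated_text: str) -> list:
    """Extract <tool_call> JSON blocks and execute the named functions."""
    results = []
    pattern = r"<tool_call>\s*(\{.*?\})\s*</tool_call>"
    for block in re.findall(pattern, generated_text, re.DOTALL):
        call = json.loads(block)
        results.append(TOOL_IMPLS[call["name"]](**call["arguments"]))
    return results

# e.g. run_tool_calls(output[0]) -> [{'city': 'Boston', ...}]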

Evaluation Results:

| Benchmarks | Metric | 3B Dense | 8B Dense | 30B Dense |
| :--- | :--- | ---: | ---: | ---: |
| **General Tasks** | | | | |
| MMLU | 5-shot | 67.02 | 73.84 | 80.16 |
| MMLU-Pro | 5-shot, CoT | 49.83 | 55.99 | 64.09 |
| BBH | 3-shot, CoT | 75.83 | 80.51 | 83.74 |
| AGI EVAL | 0-shot, CoT | 65.16 | 72.43 | 77.80 |
| GPQA | 0-shot, CoT | 31.70 | 41.96 | 45.76 |
| SimpleQA | | 3.68 | 4.82 | 6.81 |
| **Alignment Tasks** | | | | |
| AlpacaEval 2.0 | | 38.57 | 50.08 | 56.16 |
| IFEval Avg | | 82.30 | 87.06 | 89.65 |
| ArenaHard | | 37.80 | 68.98 | 71.02 |
| MTBench Avg | | 7.57 | 8.61 | 8.61 |
| **Math Tasks** | | | | |
| GSM8K | 8-shot | 86.88 | 92.49 | 94.16 |
| GSM Symbolic | 8-shot | 81.32 | 83.70 | 75.70 |
| Minerva Math | 0-shot, CoT | 67.94 | 80.10 | 81.32 |
| DeepMind Math | 0-shot, CoT | 64.64 | 80.07 | 81.93 |
| **Code Tasks** | | | | |
| HumanEval | pass@1 | 81.71 | 85.37 | 88.41 |
| HumanEval+ | pass@1 | 76.83 | 79.88 | 85.37 |
| MBPP | pass@1 | 71.16 | 87.30 | 85.45 |
| MBPP+ | pass@1 | 62.17 | 73.81 | 73.54 |
| CRUXEval-O | pass@1 | 40.75 | 47.63 | 55.75 |
| BigCodeBench | pass@1 | 32.19 | 35.00 | 38.77 |
| MULTIPLE | pass@1 | 52.54 | 60.26 | 62.31 |
| Eval+ Avg | pass@1 | 67.05 | 80.21 | 82.66 |
| **Tool Calling Tasks** | | | | |
| BFCL v3 | | 60.80 | 68.27 | 73.68 |
| **Multilingual Tasks** | | | | |
| MMMLU | 5-shot | 57.61 | 64.84 | 73.71 |
| INCLUDE | 5-shot | 52.05 | 58.89 | 67.26 |
| MGSM | 8-shot | 70.00 | 82.32 | 71.12 |
| **Safety** | | | | |
| SALAD-Bench | | 93.95 | 95.80 | 96.41 |
| AttaQ | | 81.88 | 81.19 | 85.76 |
| Tulu3 Safety Eval Avg | | 66.84 | 75.57 | 78.19 |

Multilingual Benchmarks and the included languages:
| Benchmarks | # Langs | Languages |
| :--- | :--- | :--- |
| MMMLU | 11 | ar, de, en, es, fr, ja, ko, pt, zh, bn, hi |
| INCLUDE | 14 | hi, bn, ta, te, ar, de, es, fr, it, ja, ko, nl, pt, zh |
| MGSM | 5 | en, es, fr, ja, zh |

Model Architecture:

Granite-4.1-30B is built on a decoder-only dense transformer architecture. Core components of this architecture are GQA, RoPE, an MLP with SwiGLU activation, RMSNorm, and shared input/output embeddings.

| Model | 3B Dense | 8B Dense | 30B Dense |
| :--- | ---: | ---: | ---: |
| Embedding size | 2560 | 4096 | 4096 |
| Number of layers | 40 | 40 | 64 |
| Attention head size | 64 | 128 | 128 |
| Number of attention heads | 40 | 32 | 32 |
| Number of KV heads | 8 | 8 | 8 |
| MLP / shared expert hidden size | 8192 | 12800 | 32768 |
| MLP activation | SwiGLU | SwiGLU | SwiGLU |
| Sequence length | 131072 | 131072 | 131072 |
| Position embedding | RoPE | RoPE | RoPE |
| # Parameters | 3B | 8B | 30B |
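
As an illustration of the SwiGLU MLP named in the table, here is a minimal, generic PyTorch sketch of the standard gated formulation, down_proj(SiLU(gate_proj(x)) * up_proj(x)). The class and parameter names are our own, and the defaults use the 30B column's sizes (embedding size 4096, MLP hidden size 32768); this is a reference sketch, not IBM's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUMLP(nn.Module):
    """Generic SwiGLU feed-forward block: down(SiLU(gate(x)) * up(x))."""
    def __init__(self, d_model: int = 4096, d_hidden: int = 32768):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_hidden, bias=False)
        self.up_proj = nn.Linear(d_model, d_hidden, bias=False)
        self.down_proj = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))

# Quick shape check with small sizes:
# SwiGLUMLP(d_model=64, d_hidden=512)(torch.randn(1, 16, 64)).shape
# -> torch.Size([1, 16, 64])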

Training Data: Overall, our SFT data is largely drawn from three key sources: (1) publicly available datasets with permissive licenses, (2) internal synthetic data targeting specific capabilities, and (3) a select set of human-curated data.

Supervised Fine-Tuning and Reinforcement Learning: The instruct model has been fine-tuned with significantly improved SFT and reinforcement learning pipelines using the high-quality mix of datasets described above. Through rigorous SFT-RL cycles we have improved the Granite-4.1 models' tool-calling, instruction-following, and chat capabilities. For further details, please check our Granite-4.1 Blog.

Infrastructure: We trained the Granite 4.1 language models on an NVIDIA GB200 NVL72 cluster hosted by CoreWeave. Intra-rack communication occurs via the 72-GPU NVLink domain, and a non-blocking, full Fat-Tree NDR 400 Gb/s InfiniBand network provides inter-rack communication. This cluster provides a scalable and efficient infrastructure for training our models over thousands of GPUs.

Ethical Considerations and Limitations: Granite 4.1 instruct models are primarily finetuned on instruction-response pairs that are mostly in English, along with multilingual data covering multiple languages. Although the model can handle multilingual dialog use cases, its performance on non-English tasks might not match its performance on English tasks. In such cases, introducing a small number of examples (few-shot) can help the model generate more accurate outputs. While this model has been aligned with safety in mind, it may in some cases produce inaccurate, biased, or unsafe responses to user prompts. We urge the community to use this model with proper safety testing and tuning tailored to their specific tasks. To enhance safety in enterprise deployments, we recommend using Granite 4.1 language models alongside Granite Guardian, a model designed to detect and flag risks in inputs and outputs across key dimensions outlined in the IBM AI Risk Atlas.
