LFM2.5‑VL-450M

LFM2.5‑VL-450M is Liquid AI's refreshed version of its first vision-language model, LFM2-VL-450M, built on the updated LFM2.5-350M backbone and tuned for stronger real-world performance. Learn more about the LFM2.5 family of models in our blog post.

  • Enhanced instruction following on vision and language tasks.
  • Improved multilingual vision understanding in Arabic, Chinese, French, German, Japanese, Korean, Portuguese and Spanish.
  • Bounding box prediction and object detection for grounded visual understanding.
  • Function calling support for text-only input.

🎥⚡️ You can try LFM2.5-VL-450M running locally in your browser with our real-time video-stream captioning WebGPU demo.

Alternatively, try the API model on the Playground.

📄 Model details

LFM2.5-VL-450M is a general-purpose vision-language model with the following features:

  • LM Backbone: LFM2.5-350M
  • Vision encoder: SigLIP2 NaFlex shape‑optimized 86M
  • Context length: 32,768 tokens
  • Vocabulary size: 65,536
  • Languages: English, Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish
  • Native resolution processing: handles images up to 512×512 pixels without upscaling and preserves non-standard aspect ratios without distortion
  • Tiling strategy: splits large images into non-overlapping 512×512 patches and includes thumbnail encoding for global context
  • Inference-time flexibility: user-tunable maximum image tokens and tile count for speed/quality tradeoff without retraining
  • Generation parameters:
    • text: temperature=0.1, min_p=0.15, repetition_penalty=1.05
    • vision: min_image_tokens=32, max_image_tokens=256, do_image_splitting=True
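As a rough sketch of the tiling strategy described above, the tile count for a given image size can be estimated as follows. This is an illustration only: the grid split and counting the global thumbnail as one extra encoding are assumptions, and the exact logic lives inside the processor.

```python
import math

TILE = 512  # tile side length used by the tiling strategy

def estimate_tiles(width: int, height: int) -> int:
    """Estimate how many 512x512 encodings an image produces (sketch).

    Images at or below 512x512 are processed natively (one tile, no
    upscaling); larger images are split into a grid of non-overlapping
    tiles, plus one thumbnail encoding for global context.
    """
    if width <= TILE and height <= TILE:
        return 1  # native resolution, no splitting
    cols = math.ceil(width / TILE)
    rows = math.ceil(height / TILE)
    return cols * rows + 1  # grid tiles + global thumbnail

print(estimate_tiles(512, 512))   # 1
print(estimate_tiles(1024, 768))  # 2*2 + 1 = 5
```

Together with max_image_tokens, this bounds the visual token budget per image, which is the knob behind the speed/quality tradeoff mentioned above.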
| Model | Description |
|---|---|
| LFM2.5-VL-450M | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| LFM2.5-VL-450M-GGUF | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| LFM2.5-VL-450M-ONNX | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| LFM2.5-VL-450M-MLX-8bit | MLX format for Apple Silicon. Optimized for fast on-device inference on Mac with mlx-vlm. Also available in 4bit, 5bit, 6bit, and bf16. |

We recommend it for general vision-language workloads such as captioning and object detection. It's not well-suited for knowledge-intensive tasks or fine-grained OCR.

Chat Template

LFM2.5-VL uses a ChatML-like format. See the Chat Template documentation for details.

<|startoftext|><|im_start|>system
You are a helpful multimodal assistant by Liquid AI.<|im_end|>
<|im_start|>user
<image>Describe this image.<|im_end|>
<|im_start|>assistant
This image shows a Caenorhabditis elegans (C. elegans) nematode.<|im_end|>

You can use processor.apply_chat_template() to format your messages automatically.
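For illustration, here is a hand-rolled sketch of this ChatML-like formatting. The helper below is purely illustrative; in practice, always use processor.apply_chat_template() rather than formatting prompts manually.

```python
def format_chatml(messages, add_generation_prompt=True):
    """Format messages in the ChatML-like template shown above (sketch only)."""
    parts = ["<|startoftext|>"]
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        # Open the assistant turn so generation continues from here
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

msgs = [
    {"role": "system", "content": "You are a helpful multimodal assistant by Liquid AI."},
    {"role": "user", "content": "<image>Describe this image."},
]
print(format_chatml(msgs))
```

Note the `<image>` placeholder in the user turn: the processor replaces it with the encoded image tokens at inference time.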

🏃 Inference

You can run LFM2.5-VL-450M with Hugging Face transformers v5.1 or newer:

pip install transformers pillow

from transformers import AutoProcessor, AutoModelForImageTextToText
from transformers.image_utils import load_image

# Load model and processor
model_id = "LiquidAI/LFM2.5-VL-450M"
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16"
)
processor = AutoProcessor.from_pretrained(model_id)

# Load image and create conversation
url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
image = load_image(url)
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "What is in this image?"},
        ],
    },
]

# Generate Answer
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    tokenize=True,
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
processor.batch_decode(outputs, skip_special_tokens=True)[0]

# This image captures the iconic Statue of Liberty standing majestically on Liberty Island in New York City. The statue, a symbol of freedom and democracy, is prominently featured in the foreground, its greenish-gray hue contrasting beautifully with the surrounding water.

Visual grounding

LFM2.5-VL-450M supports bounding box prediction:

url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
image = load_image(url)
query = "statue"
prompt = f'Detect all instances of: {query}. Response must be a JSON array: [{{"label": ..., "bbox": [x1, y1, x2, y2]}}, ...]. Coordinates are normalized to [0,1].'

conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": prompt},
        ],
    },
]

# Generate Answer
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    tokenize=True,
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
processor.batch_decode(outputs, skip_special_tokens=True)[0]

# [{"label": "statue", "bbox": [0.3, 0.25, 0.4, 0.65]}]
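Since the returned JSON uses coordinates normalized to [0,1], a small helper can map boxes back to pixel space. This is a sketch: it assumes the model emits valid JSON, which is worth guarding with a try/except in production.

```python
import json

# Example model output from the grounding prompt (normalized coordinates)
raw = '[{"label": "statue", "bbox": [0.3, 0.25, 0.4, 0.65]}]'

def to_pixel_boxes(raw_json: str, width: int, height: int):
    """Convert normalized [x1, y1, x2, y2] boxes to pixel coordinates."""
    out = []
    for b in json.loads(raw_json):
        x1, y1, x2, y2 = b["bbox"]
        out.append({
            "label": b["label"],
            "bbox": [round(x1 * width), round(y1 * height),
                     round(x2 * width), round(y2 * height)],
        })
    return out

print(to_pixel_boxes(raw, 1600, 1200))
# [{'label': 'statue', 'bbox': [480, 300, 640, 780]}]
```

Use the original image's width and height (before any processor-side resizing) when denormalizing.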

Tool Use

LFM2.5-VL-450M supports function calling for text-only inputs by applying the chat template with the tokenizer. See the Tool Use documentation for the full guide.

tools = [{
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
    }
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Apply chat template with tools
inputs = processor.tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
)
input_ids = inputs["input_ids"].to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
response = processor.tokenizer.decode(outputs[0, input_ids.shape[1]:], skip_special_tokens=False)

# <|tool_call_start|>[get_weather(location="Paris")]<|tool_call_end|>I am retrieving the current weather for Paris.<|im_end|>
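The special tool-call tokens make the call straightforward to extract from the response. A minimal parsing sketch follows; it handles only string-valued arguments, and the helper name and regexes are illustrative rather than part of any library.

```python
import re

response = '<|tool_call_start|>[get_weather(location="Paris")]<|tool_call_end|>I am retrieving the current weather for Paris.'

def parse_tool_calls(text: str):
    """Extract call names and string arguments from tool-call spans (sketch)."""
    calls = []
    # Grab everything between the tool-call special tokens
    for span in re.findall(r"<\|tool_call_start\|>\[(.*?)\]<\|tool_call_end\|>", text):
        m = re.match(r"(\w+)\((.*)\)", span)
        if not m:
            continue
        name, arg_str = m.groups()
        # Parse key="value" pairs (string-valued arguments only)
        args = dict(re.findall(r'(\w+)="([^"]*)"', arg_str))
        calls.append({"name": name, "arguments": args})
    return calls

print(parse_tool_calls(response))
# [{'name': 'get_weather', 'arguments': {'location': 'Paris'}}]
```

Each parsed call can then be dispatched to the matching function and its result appended to the conversation as a tool message.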
| Name | Description | Docs | Notebook |
|---|---|---|---|
| Transformers | Simple inference with direct access to model internals. | Link | Colab link |
| vLLM | High-throughput production deployments with GPU. | Link | Colab link |
| SGLang | High-throughput production deployments with GPU. | Link | Colab link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Colab link |

🔧 Fine-tuning

We recommend fine-tuning the LFM2.5-VL-450M model on your own use cases to maximize performance.

| Notebook | Description | Link |
|---|---|---|
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | Colab link |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | Colab link |

📊 Performance

LFM2.5-VL-450M improves over LFM2-VL-450M across both vision and language benchmarks, while also adding two new capabilities: bounding box prediction on RefCOCO-M and function calling support measured by BFCLv4.

Vision benchmarks

| Model | MMStar | RealWorldQA | MMBench (dev en) | MMMU (val) | POPE | MMVet | BLINK | InfoVQA (val) | OCRBench | MM-IFEval | MMMB | CountBench | RefCOCO-M |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| LFM2.5-VL-450M | 43.00 | 58.43 | 60.91 | 32.67 | 86.93 | 41.10 | 43.92 | 43.02 | 684 | 45.00 | 68.09 | 73.31 | 81.28 |
| LFM2-VL-450M | 40.87 | 52.03 | 56.27 | 34.44 | 83.79 | 33.85 | 42.61 | 44.56 | 657 | 33.09 | 54.29 | 47.64 | - |
| SmolVLM2-500M | 38.20 | 49.90 | 52.32 | 34.10 | 82.67 | 29.90 | 40.70 | 24.64 | 609 | 11.27 | 46.79 | 61.81 | - |

All vision benchmark scores are obtained using VLMEvalKit. Multilingual scores are based on the average of benchmarks translated by GPT-4.1-mini from English to Arabic, Chinese, French, German, Japanese, Korean, Portuguese, and Spanish.

Language benchmarks

| Model | GPQA | MMLU Pro | IFEval | Multi-IF | BFCLv4 |
|---|---|---|---|---|---|
| LFM2.5-VL-450M | 25.66 | 19.32 | 61.16 | 34.63 | 21.08 |
| LFM2-VL-450M | 23.13 | 17.22 | 51.75 | 26.21 | - |
| SmolVLM2-500M | 23.84 | 13.57 | 30.14 | 6.82 | - |

Citation

@article{liquidai2025lfm2,
  title={LFM2 Technical Report},
  author={Liquid AI},
  journal={arXiv preprint arXiv:2511.23404},
  year={2025}
}