LFM2.5‑VL-1.6B

LFM2.5-VL-1.6B is Liquid AI's refreshed version of its first vision-language model, LFM2-VL-1.6B, built on the updated LFM2.5-1.2B-Base backbone and tuned for stronger real-world performance. Find out more about the LFM2.5 family of models in our blog post.

  • Enhanced instruction following on vision and language tasks.
  • Improved multilingual vision understanding in Arabic, Chinese, French, German, Japanese, Korean, and Spanish.
  • Robust understanding of visual content with improved results on multi-image inputs, high-resolution images, and OCR.

🎥⚡️ You can try LFM2.5-VL-1.6B running locally in your browser with our real-time video stream captioning WebGPU demo 🎥⚡️

Alternatively, try the API model on the Playground.

📄 Model details

| Model | Parameters | Description |
| --- | --- | --- |
| LFM2.5-1.2B-Base | 1.2B | Pre-trained base model for fine-tuning |
| LFM2.5-1.2B-Instruct | 1.2B | General-purpose instruction-tuned model |
| LFM2.5-1.2B-Thinking | 1.2B | General-purpose reasoning model |
| LFM2.5-1.2B-JP | 1.2B | Japanese-optimized chat model |
| LFM2.5-VL-1.6B | 1.6B | Vision-language model with fast inference |
| LFM2.5-Audio-1.5B | 1.5B | Audio-language model for speech and text I/O |

LFM2.5-VL-1.6B is a general-purpose vision-language model with the following features:

  • LM Backbone: LFM2.5-1.2B-Base
  • Vision encoder: SigLIP2 NaFlex shape‑optimized 400M
  • Context length: 32,768 tokens
  • Vocabulary size: 65,536
  • Languages: English, Arabic, Chinese, French, German, Japanese, Korean, and Spanish
  • Native resolution processing: handles images up to 512×512 pixels without upscaling and preserves non-standard aspect ratios without distortion
  • Tiling strategy: splits large images into non-overlapping 512×512 patches and includes thumbnail encoding for global context
  • Inference-time flexibility: user-tunable maximum image tokens and tile count for a speed/quality tradeoff without retraining (see the sketch after this list)
  • Generation parameters:
    • text: temperature=0.1, min_p=0.15, repetition_penalty=1.05
    • vision: min_image_tokens=64, max_image_tokens=256, do_image_splitting=True
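
As a rough illustration of that inference-time flexibility, the snippet below passes the vision parameters listed above to the processor. This is a minimal sketch: exactly where these kwargs are accepted (at from_pretrained or per call) can differ between transformers versions, so treat the plumbing as an assumption and check the processor documentation.

from transformers import AutoProcessor

# Sketch only: the kwarg plumbing is assumed, verify against the processor docs
model_id = "LiquidAI/LFM2.5-VL-1.6B"
processor = AutoProcessor.from_pretrained(
    model_id,
    min_image_tokens=64,      # lower bound on image tokens per image
    max_image_tokens=256,     # lower for speed, higher for more visual detail
    do_image_splitting=True,  # tile large images into 512x512 patches
)
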
| Model | Description |
| --- | --- |
| LFM2.5-VL-1.6B | Original model checkpoint in native format. Best for fine-tuning or inference with Transformers and vLLM. |
| LFM2.5-VL-1.6B-GGUF | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
| LFM2.5-VL-1.6B-ONNX | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
| LFM2.5-VL-1.6B-MLX | MLX format for Apple Silicon. Optimized for fast inference on Mac devices using the MLX framework. |

We recommend using it for general vision-language workloads, OCR, and document comprehension. It is not well suited for knowledge-intensive tasks.

Chat Template

LFM2.5-VL uses a ChatML-like format. See the Chat Template documentation for details.

<|startoftext|><|im_start|>system
You are a helpful multimodal assistant by Liquid AI.<|im_end|>
<|im_start|>user
<image>Describe this image.<|im_end|>
<|im_start|>assistant
This image shows a Caenorhabditis elegans (C. elegans) nematode.<|im_end|>

You can use processor.apply_chat_template() to format your messages automatically.
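
For instance, passing tokenize=False returns the rendered prompt string instead of token ids, which is handy for inspecting the template. This is a minimal sketch; the message schema is the same as in the inference example below, and the image path is a placeholder.

from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("LiquidAI/LFM2.5-VL-1.6B")

# Render the ChatML-like prompt for a single user turn with one image;
# with tokenize=False the image is not loaded, only its placeholder is inserted
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/image.jpg"},  # hypothetical path
            {"type": "text", "text": "Describe this image."},
        ],
    },
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
print(prompt)
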

🏃 Inference

You can run LFM2.5-VL-1.6B with Hugging Face transformers v5.1 or newer:

pip install transformers pillow
from transformers import AutoProcessor, AutoModelForImageTextToText
from transformers.image_utils import load_image

# Load model and processor
model_id = "LiquidAI/LFM2.5-VL-1.6B"
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16"
)
processor = AutoProcessor.from_pretrained(model_id)

# Load image and create conversation
url = "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
image = load_image(url)
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "What is in this image?"},
        ],
    },
]

# Generate Answer
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    tokenize=True,
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])

# This image showcases the iconic Statue of Liberty standing majestically on Liberty Island in New York Harbor. The statue is positioned on a small island surrounded by calm blue waters, with the New York City skyline visible in the background.
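
If you want tokens printed as they are produced, transformers' TextStreamer can be attached to generate. This is an optional convenience that reuses model and inputs from the example above.

from transformers import TextStreamer

# Stream the reply to stdout token by token instead of decoding afterwards
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(**inputs, max_new_tokens=64, streamer=streamer)
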

Tool Use

LFM2.5 supports function calling for text-only inputs by applying the chat template with the tokenizer. See the Tool Use documentation for the full guide.

tools = [{
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
    }
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Apply chat template with tools
inputs = processor.tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
)
input_ids = inputs["input_ids"].to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
response = processor.tokenizer.decode(outputs[0, input_ids.shape[1]:], skip_special_tokens=False)

# <|tool_call_start|>[get_weather(location="Paris")]<|tool_call_end|>I am retrieving the current weather for Paris.<|im_end|>
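
To close the loop after the model emits a tool call, you typically execute the function yourself, append the result to the conversation, and generate again. Below is a minimal sketch under the assumption that the chat template accepts a "tool" role message and that the tool result is passed as a JSON string; see the Tool Use documentation for the canonical flow.

# Hypothetical tool result; in practice you may want to strip trailing special
# tokens from `response` before appending it as the assistant turn
messages.append({"role": "assistant", "content": response})
messages.append({"role": "tool", "content": '{"temperature_c": 18, "condition": "cloudy"}'})

inputs = processor.tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
)
input_ids = inputs["input_ids"].to(model.device)
outputs = model.generate(input_ids, max_new_tokens=256)
print(processor.tokenizer.decode(outputs[0, input_ids.shape[1]:], skip_special_tokens=True))
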
| Name | Description | Docs | Notebook |
| --- | --- | --- | --- |
| Transformers | Simple inference with direct access to model internals. | Link | Colab link |
| vLLM | High-throughput production deployments with GPU. | coming soon | Colab link |
| llama.cpp | Cross-platform inference with CPU offloading. | Link | Colab link |

🔧 Fine-tuning

We recommend fine-tuning the LFM2.5-VL-1.6B model on your own use cases to maximize performance.

| Notebook | Description | Link |
| --- | --- | --- |
| SFT (Unsloth) | Supervised Fine-Tuning with LoRA using Unsloth. | Colab link |
| SFT (TRL) | Supervised Fine-Tuning with LoRA using TRL. | Colab link |
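
The notebooks above are the reference recipes. As a rough orientation, a LoRA adapter can be attached with peft along these lines; the target_modules names here are assumptions, so inspect model.named_modules() for the actual projection layer names before training.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForImageTextToText

model = AutoModelForImageTextToText.from_pretrained(
    "LiquidAI/LFM2.5-VL-1.6B", dtype="bfloat16"
)

# Illustrative LoRA hyperparameters; module names below are assumed, not verified
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],  # assumption: attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
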

📊 Performance

| Model | MMStar | MM-IFEval | BLINK | InfoVQA (Val) | OCRBench (v2) | RealWorldQA | MMMU (Val) | MMMB (avg) | Multilingual MMBench (avg) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LFM2.5-VL-1.6B | 50.67 | 52.29 | 48.82 | 62.71 | 41.44 | 64.84 | 40.56 | 76.96 | 65.90 |
| LFM2-VL-1.6B | 49.87 | 46.35 | 44.50 | 58.35 | 35.11 | 65.75 | 39.67 | 72.13 | 60.57 |
| InternVL3.5-1B | 50.27 | 36.17 | 44.19 | 60.99 | 33.53 | 57.12 | 41.89 | 68.93 | 58.32 |
| FastVLM-1.5B | 53.13 | 24.99 | 43.29 | 23.92 | 26.61 | 61.56 | 38.78 | 64.84 | 50.89 |

All vision benchmark scores are obtained using VLMEvalKit. Multilingual scores are based on the average of benchmarks translated by GPT-4.1-mini from English to Arabic, Chinese, French, German, Japanese, Korean, and Spanish.

📬 Contact

Citation

@article{liquidai2025lfm2,
 title={LFM2 Technical Report},
 author={Liquid AI},
 journal={arXiv preprint arXiv:2511.23404},
 year={2025}
}