
z-lab/gemma-4-31B-it-PARO


Pairwise Rotation Quantization for Efficient Reasoning LLM Inference


ParoQuant is a state-of-the-art INT4 quantization method for LLMs. It closes the accuracy gap with FP16 while running at near-AWQ speed. It supports NVIDIA GPUs (via vLLM and Transformers) and Apple Silicon (via MLX). For more information, see https://github.com/z-lab/paroquant.

z-lab/gemma-4-31B-it-PARO is a 4-bit version of google/gemma-4-31B-it quantized with ParoQuant. Check out other ParoQuant models in the Hugging Face collection.

Quick Start

Installation

# NVIDIA GPU (CUDA 12.9)
pip install "paroquant[vllm]"

# NVIDIA GPU (CUDA 13.0)
pip install "paroquant[vllm]" "vllm==0.19.0" \
  --extra-index-url https://wheels.vllm.ai/2a69949bdadf0e8942b7a1619b229cb475beef20/cu130 \
  --extra-index-url https://download.pytorch.org/whl/cu130

# Apple Silicon
pip install "paroquant[mlx]"

Interactive Chat

python -m paroquant.cli.chat --model z-lab/gemma-4-31B-it-PARO

OpenAI-Compatible API Server

python -m paroquant.cli.serve --model z-lab/gemma-4-31B-it-PARO --port 8000

For vLLM, the arguments are passed to the vLLM server directly. See vLLM docs for more details.

For MLX, add --vlm if you wish to load the VLM components and use the model's multimodal features. For vLLM, the VLM components are loaded by default and can be skipped with the server argument --language-model-only.
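Once the server is running, any OpenAI-compatible client can talk to it. Below is a minimal sketch of building a chat-completions request for the local server; the endpoint path and payload shape follow the standard OpenAI API, and the port/URL assume the --port 8000 default from the command above.

```python
import json

# Assumed local endpoint, matching the `--port 8000` server command above.
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "z-lab/gemma-4-31B-it-PARO") -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    The model name must match the --model flag passed to the server.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }

payload = build_chat_request("Explain pairwise rotation quantization in one sentence.")
body = json.dumps(payload)  # POST this to API_URL with Content-Type: application/json
```

For example, `requests.post(API_URL, json=payload)` or the official `openai` Python client (with `base_url="http://localhost:8000/v1"`) will both work against this endpoint.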

Docker (NVIDIA GPU)

[!NOTE] The following commands map the local cache directory to the container in order to persist kernel cache across runs. Remove -v ... to disable this behaviour.

# Interactive chat
docker run --pull=always --rm -it --gpus all --ipc=host \
  -v $HOME/.cache/paroquant:/root/.cache/paroquant \
  ghcr.io/z-lab/paroquant:chat --model z-lab/gemma-4-31B-it-PARO

# API server (port 8000)
docker run --pull=always --rm -it --gpus all --ipc=host -p 8000:8000 \
  -v $HOME/.cache/paroquant:/root/.cache/paroquant \
  ghcr.io/z-lab/paroquant:serve --model z-lab/gemma-4-31B-it-PARO

Citation

@inproceedings{liang2026paroquant,
  title     = {{ParoQuant: Pairwise Rotation Quantization for Efficient Reasoning LLM Inference}},
  author    = {Liang, Yesheng and Chen, Haisheng and Zhang, Zihan and Han, Song and Liu, Zhijian},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2026}
}