Motif-Technologies/Motif-Video-2B-GGUF

Motif-Video-2B GGUF

GGUF quantized variants of Motif-Video-2B, a 2-billion parameter text-to-video diffusion transformer.

These files are intended for use with the diffusers library and allow you to run Motif-Video with reduced VRAM requirements by loading a quantized transformer while keeping the rest of the pipeline in the original precision.

Quality Comparison

Same prompt and seed across all variants (1280x736, 121 frames, 50 steps, NVIDIA H200). BF16 baseline at top, quantized variants paired below (4-bit → 8-bit). Each video is rendered at 1/2 resolution (640x368 per cell) at the original 24 fps.

[Video grid, top to bottom: BF16 baseline; then paired variants Q4_0 / Q4_1, Q4_K_M / Q5_0, Q5_1 / Q5_K_M, Q6_K / Q8_0]

Available Files

| File | Quantization | Size |
|---|---|---|
| motifv-2b-dev-Q4_0.gguf | Q4_0 | 1.1G |
| motifv-2b-dev-Q4_1.gguf | Q4_1 | 1.2G |
| motifv-2b-dev-Q4_K_M.gguf | Q4_K_M | 1.1G |
| motifv-2b-dev-Q5_0.gguf | Q5_0 | 1.3G |
| motifv-2b-dev-Q5_1.gguf | Q5_1 | 1.4G |
| motifv-2b-dev-Q5_K_M.gguf | Q5_K_M | 1.3G |
| motifv-2b-dev-Q6_K.gguf | Q6_K | 1.6G |
| motifv-2b-dev-Q8_0.gguf | Q8_0 | 2.0G |
| motifv-2b-dev-BF16.gguf | BF16 | 3.7G |

Recommendation: Q5_K_M and Q6_K offer a good balance between quality and file size. Q8_0 is closest to the original BF16 quality. Q4_K_M is the most memory-efficient option for constrained environments.
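As a rough sanity check on the sizes above, a GGUF file is approximately parameters × average bits-per-weight ÷ 8. The bits-per-weight figures below are approximations for each scheme (weights plus per-block scale metadata), not values published with this model, so the estimates only loosely track the table:

```python
# Rough GGUF file-size estimate: params * avg bits-per-weight / 8.
# The bits-per-weight values are approximate averages per scheme
# (assumptions, not published figures); actual sizes depend on tensor shapes.
PARAMS = 2.0e9  # ~2B transformer parameters

BITS_PER_WEIGHT = {
    "Q4_0": 4.5, "Q4_1": 5.0, "Q4_K_M": 4.8,
    "Q5_0": 5.5, "Q5_1": 6.0, "Q5_K_M": 5.7,
    "Q6_K": 6.6, "Q8_0": 8.5, "BF16": 16.0,
}

def estimated_size_gb(variant: str) -> float:
    """Estimated file size in GB for a given quantization variant."""
    return PARAMS * BITS_PER_WEIGHT[variant] / 8 / 1e9

for v in ("Q4_0", "Q5_K_M", "Q8_0", "BF16"):
    print(f"{v}: ~{estimated_size_gb(v):.2f} GB")
```

Q4_0 comes out near 1.1 GB and BF16 near 4 GB, consistent with the listed sizes (which appear to be rounded binary gigabytes).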

Installation

Prerequisites: PyTorch with CUDA support must be installed first. See pytorch.org for your CUDA version.

pip install "transformers>=5.5.4" accelerate ftfy einops sentencepiece regex Pillow imageio imageio-ffmpeg gguf
pip install git+https://github.com/waitingcheung/diffusers.git@feat/motif-video

Usage

import torch
from diffusers import (
    AdaptiveProjectedGuidance,
    DPMSolverMultistepScheduler,
    GGUFQuantizationConfig,
    MotifVideoPipeline,
    MotifVideoTransformer3DModel,
)
from diffusers.utils import export_to_video
from huggingface_hub import hf_hub_download


# DPMSolver++ subclass that ignores pipeline-supplied sigmas and builds its own flow-matching schedule.
class FlowDPMSolver(DPMSolverMultistepScheduler):
    def set_timesteps(self, num_inference_steps=None, device=None,
                      sigmas=None, mu=None, timesteps=None):
        if sigmas is not None and num_inference_steps is None:
            num_inference_steps = len(sigmas)
        super().set_timesteps(num_inference_steps=num_inference_steps, device=device, timesteps=timesteps)


guider = AdaptiveProjectedGuidance(
    guidance_scale=8.0,
    adaptive_projected_guidance_rescale=12.0,
    adaptive_projected_guidance_momentum=0.1,
    use_original_formulation=True,
    normalization_dims="spatial",
)

variant = "Q4_K_M"  # options: Q4_0, Q4_1, Q4_K_M, Q5_0, Q5_1, Q5_K_M, Q6_K, Q8_0, BF16

ckpt_path = hf_hub_download(
    "Motif-Technologies/Motif-Video-2B-GGUF",
    filename=f"motifv-2b-dev-{variant}.gguf",
)
transformer = MotifVideoTransformer3DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    config="Motif-Technologies/Motif-Video-2B",
    revision="diffusers-integration",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

pipe = MotifVideoPipeline.from_pretrained(
    "Motif-Technologies/Motif-Video-2B",
    revision="diffusers-integration",
    torch_dtype=torch.bfloat16,
    guider=guider,
    transformer=transformer,
)

# Replace default Euler scheduler with DPMSolver++ (flow matching).
flow_shift = 15.0  # bias sampling toward earlier (high-noise) sigmas.
pipe.scheduler = FlowDPMSolver(
    num_train_timesteps=pipe.scheduler.config.get("num_train_timesteps", 1000),
    algorithm_type="dpmsolver++",
    solver_order=2,
    prediction_type="flow_prediction",
    use_flow_sigmas=True,
    flow_shift=flow_shift,
)

pipe.enable_model_cpu_offload()

prompt = (
    "A woman standing in a sunlit field as flower petals swirl around her in slow motion. "
    "Each petal floats gently through the golden light, casting tiny shadows. "
    "Her hair moves like water, and time seems to stand still."
)
negative_prompt = (
    "text overlay, graphic overlay, watermark, logo, subtitles, timestamp, "
    "broadcast graphics, UI elements, random letters, frozen pose, rigid, static expression, "
    "jerky motion, mechanical motion, discontinuous motion, flat framing, depthless, dull lighting, "
    "monotone, crushed shadows, blown-out highlights, shifting background, fading background, "
    "poor continuity, identity drift, deformation, flickering, ghosting, smearing, duplication, "
    "mutated proportions, inconsistent clothing, flat colors, desaturated, tonally compressed, "
    "poor background separation, exposure shift, uneven brightness, color balance shift"
)

generator = torch.Generator(device="cuda").manual_seed(42)
output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=736,
    width=1280,
    num_frames=121,
    num_inference_steps=50,
    generator=generator,
    frame_rate=24,
    use_linear_quadratic_schedule=False,
)
export_to_video(output.frames[0], "output.mp4", fps=24)

Benchmark

Measured on NVIDIA H200, 1280x736, 121 frames, 50 steps:

| Variant | Speed (s/it) | Peak alloc (GB) | Peak rsv (GB) | Total (s) | VRAM saved vs BF16 (rsv, GB) |
|---|---|---|---|---|---|
| BF16 | 23.22 | 14.78 | 24.93 | 1176.1 | |
| Q8_0 | 23.24 | 13.10 | 23.14 | 1177.0 | 1.79 |
| Q6_K | 23.34 | 12.62 | 22.72 | 1181.7 | 2.21 |
| Q5_K_M | 23.37 | 12.39 | 22.45 | 1183.0 | 2.48 |
| Q5_1 | 23.35 | 12.47 | 22.66 | 1182.4 | 2.27 |
| Q5_0 | 23.35 | 12.37 | 22.55 | 1181.9 | 2.38 |
| Q4_K_M | 23.34 | 12.19 | 22.22 | 1181.5 | 2.71 |
| Q4_1 | 23.29 | 12.26 | 22.26 | 1179.2 | 2.67 |
| Q4_0 | 23.31 | 12.14 | 22.18 | 1179.8 | 2.75 |
  • Peak alloc = peak GPU memory occupied by live tensors (model weights + activations), via torch.cuda.max_memory_allocated.
  • Peak rsv = peak GPU memory reserved by PyTorch's caching allocator (alloc + cached free blocks), via torch.cuda.max_memory_reserved. Use this as the effective VRAM footprint when planning headroom.
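
The "VRAM saved" column can be reproduced directly from the peak-reserved figures. A minimal check, with values copied from the table:

```python
# Reproduce the "VRAM saved vs BF16 (rsv)" column from the peak-reserved
# column of the benchmark table above (all values in GB).
peak_rsv = {
    "BF16": 24.93, "Q8_0": 23.14, "Q6_K": 22.72, "Q5_K_M": 22.45,
    "Q5_1": 22.66, "Q5_0": 22.55, "Q4_K_M": 22.22, "Q4_1": 22.26,
    "Q4_0": 22.18,
}

savings = {v: round(peak_rsv["BF16"] - rsv, 2)
           for v, rsv in peak_rsv.items() if v != "BF16"}

print(savings["Q4_0"])  # 2.75
print(savings["Q8_0"])  # 1.79
```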

Key findings:

  • Speed is near-identical across all quantizations (~23.2-23.4 s/it), with no measurable dequantization overhead.
  • VRAM savings scale with quant level: Q4 saves ~2.7 GB, Q8 saves ~1.8 GB (reserved).
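
A back-of-envelope check on the timing columns: 50 denoising steps at the reported s/it accounts for nearly all of the total wall time. The small remainder (~15 s) is presumably text encoding, VAE decode, and setup, though that attribution is an assumption since the table does not break it down:

```python
# Denoising time ~= speed (s/it) * 50 steps; the remainder of the total
# is non-denoising overhead (assumed: text encoding, VAE decode, setup).
speed_s_per_it = {"BF16": 23.22, "Q4_0": 23.31}
total_s = {"BF16": 1176.1, "Q4_0": 1179.8}

for v in speed_s_per_it:
    denoise = speed_s_per_it[v] * 50
    overhead = total_s[v] - denoise
    print(f"{v}: denoising ~{denoise:.0f} s, overhead ~{overhead:.1f} s")
```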

Notes

  • The non-transformer components (VAE, text encoder, scheduler) are loaded from the base model Motif-Technologies/Motif-Video-2B at revision="diffusers-integration" in BF16.
  • All inference is performed on CUDA. CPU inference is not supported.