FramePack: Practical Video Diffusion on Consumer GPUs

FramePack lets you generate minute-long 30 fps videos, frame by frame, on laptops and desktops with as little as 6 GB of VRAM. Join thousands of creators already accelerating their workflow with FramePack.

Get Started with FramePack →

FramePack Video Showcase

Cinematic Scene

Abstract Motion

Artistic Flow

Dynamic Landscape

Surreal Animation

Creative Sequence

All videos generated using FramePack on consumer GPUs

Try FramePack Now

Why choose FramePack?

Ultra‑Low VRAM

With FramePack, you can run advanced 13 B video models and generate up to 1,800 frames on just 6 GB of GPU memory—perfect for portable rigs.

Progressive Generation

FramePack streams frames as they're produced, so you get immediate visual feedback and never waste time waiting for long renders.
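The streaming behavior can be sketched as a Python generator; `sample_frame` here is a hypothetical stand-in for the per-frame diffusion step, not FramePack's actual API:

```python
def generate_progressively(num_frames, sample_frame):
    """Yield each frame as soon as it is sampled, instead of returning
    the whole clip at the end. `sample_frame` is a hypothetical callback
    that produces frame i given the frames generated so far."""
    history = []
    for i in range(num_frames):
        frame = sample_frame(i, history)
        history.append(frame)
        yield frame  # caller can display or save this immediately
```

Because the function is a generator, a GUI can show frame 1 while frame 2 is still being sampled, which is where the immediate visual feedback comes from.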

Open & Extendable

FramePack is licensed under Apache‑2.0, built on familiar PyTorch tooling, and ready for researchers, hobbyists, and production teams alike.

Who is FramePack for?

FramePack for Creators

Content Creators

Bring still images to life using FramePack for YouTube, TikTok, and marketing videos in minutes.

FramePack for Researchers

Researchers & Devs

Prototype new video diffusion ideas quickly by extending FramePack's modular Python codebase.

FramePack for Studios

Studios & Agencies

Use FramePack to iterate storyboards and dynamic ads without costly render farms.

FramePack features you'll love

Constant Context Length

FramePack compresses input frames, keeping compute costs flat regardless of video duration.
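A minimal sketch of the idea, assuming a geometric token schedule (the exact compression schedule FramePack uses may differ): each past frame's token budget shrinks with age, so the total context stays roughly constant no matter how long the video grows.

```python
def context_budget(num_past_frames: int, full_tokens: int = 1536) -> list[int]:
    """Illustrative token budget per past frame: the most recent frame
    keeps the full budget, and each older frame gets half as many tokens
    (floored at 1). `full_tokens` is an assumed value, not FramePack's."""
    budgets = []
    for age in range(num_past_frames):  # age 0 = most recent frame
        budgets.append(max(1, full_tokens >> age))
    return budgets

# The geometric series keeps total context near 2 * full_tokens
# (plus a small floor term), regardless of video length.
print(sum(context_budget(30)))
```

With a bounded context like this, attention cost per generated frame stays flat whether the clip is 2 seconds or 60 seconds long.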

GUI & API

Launch FramePack's intuitive Gradio app or integrate directly into your Python pipeline.

Flexible Attention Kernels

Accelerate FramePack with PyTorch, Xformers, Flash‑Attn, Sage‑Attention, and more.

High Batch Training

Train FramePack models with image-diffusion-scale batch sizes for more stable, sharper motion.

Open Weights & Examples

Reproduce official FramePack sample videos or fine‑tune on your own footage.

Extremely Fast Inference

Experience near-real-time feedback: FramePack reaches ~1.5 s per frame on an RTX 4090 with optimizations enabled.

What FramePack users say

"Finally, video diffusion that runs on my 8 GB laptop GPU! FramePack feels as snappy as Stable Diffusion for images."
— Reddit user @ai_enthusiast

FramePack FAQ

What GPUs are supported by FramePack?

NVIDIA RTX 30-, 40-, and 50-series GPUs with FP16/BF16 support. Older or lower-tier cards may work but are untested.

How much VRAM does FramePack need?

A minimum of 6 GB for 60‑second, 30 fps videos using the 13 B model.

How fast is FramePack inference?

~2.5 s per frame on an RTX 4090 (1.5 s with optimizations) and proportionally slower on laptops.
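As a back-of-the-envelope check, those per-frame times translate into wall-clock render times like so:

```python
# Wall-clock estimate for a 60-second, 30 fps clip at the quoted speeds.
frames = 60 * 30                    # 1800 frames total
baseline_min = frames * 2.5 / 60    # at ~2.5 s/frame
optimized_min = frames * 1.5 / 60   # at ~1.5 s/frame with optimizations
print(frames, baseline_min, optimized_min)  # 1800 75.0 45.0
```

So a full one-minute clip takes roughly 75 minutes unoptimized, or about 45 minutes with optimizations, on an RTX 4090.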

Does FramePack support Windows?

Yes. A one‑click package for Windows will be released soon. Linux is fully supported today.

Can I fine‑tune FramePack models?

Absolutely! FramePack is open source and designed for research. Training scripts are included.