[ICCV 2025]

Flash-VStream Logo

Flash-VStream: Efficient Real-Time Understanding for Long Video Streams

1Tsinghua University, 2Beijing Jiaotong University, 3ByteDance Seed
*Equal contribution. †Corresponding authors. ‡Project leader.
Our previous work includes Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams (arXiv:2406.08085).

TL;DR

We propose Flash-VStream, an efficient video-language model (VLM) with a novel Flash Memory mechanism that enables real-time understanding of, and Q&A over, extremely long video streams. Our model achieves state-of-the-art performance and outstanding efficiency on the EgoSchema, MLVU, LVBench, MVBench, and Video-MME benchmarks.

Flash-VStream Teaser
Comparison with previous methods. Flash-VStream can understand long videos accurately in an online manner. Here “C” denotes critical clues for questions.
Flash-VStream Efficiency Exp
Response latency and accuracy on EgoSchema vs. inference cost. Flash-VStream can respond to user queries in real time while maintaining outstanding performance.

Abstract

Benefiting from the advances in large language models and cross-modal alignment, existing multimodal large language models have achieved prominent performance in image and short video understanding. However, the understanding of long videos is still challenging, as their long-context nature results in significant computational and memory overhead. Most existing work treats long videos in the same way as short videos, which is inefficient for real-world applications and hard to generalize to even longer videos. To address these issues, we propose Flash-VStream, an efficient video language model capable of processing extremely long videos and responding to user queries in real time. Particularly, we design a Flash Memory module, containing a low-capacity context memory to aggregate long-context temporal information and model the distribution of information density, and a high-capacity augmentation memory to retrieve detailed spatial information based on this distribution. Compared to existing models, Flash-VStream achieves significant reductions in inference latency. Extensive experiments on long video benchmarks and comprehensive video benchmarks, i.e., EgoSchema, MLVU, LVBench, MVBench and Video-MME, demonstrate the state-of-the-art performance and outstanding efficiency of our method. Code is available here.
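For intuition, below is a minimal NumPy sketch of how a constant-size two-tier memory of this kind can work. Everything here is an illustrative assumption rather than the released implementation: the class and parameter names (FlashMemory, csm_size, dam_size, retrieve_dam) are our own, and the merge-closest-centroids update and weight-based retrieval are generic stand-ins for the paper's actual CSM/DAM procedures.

```python
import numpy as np

class FlashMemory:
    """Toy two-tier memory whose size stays constant as the stream grows."""

    def __init__(self, csm_size=16, dam_size=4, feat_dim=256):
        self.csm_size = csm_size                  # context memory capacity
        self.dam_size = dam_size                  # augmentation memory capacity
        self.centroids = np.zeros((0, feat_dim))  # CSM: low-res cluster centers
        self.weights = np.zeros(0)                # frames absorbed per center
        self.bank = []  # (low-res key, high-res feat) per frame; a real system
                        # would bound or offload this bank rather than grow it.

    def update(self, low_res_feat, high_res_feat):
        """Fold one new frame into the context memory and the feature bank."""
        self.centroids = np.vstack([self.centroids, low_res_feat[None]])
        self.weights = np.append(self.weights, 1.0)
        self.bank.append((low_res_feat, high_res_feat))
        # On overflow, merge the two closest centroids (weighted average), so
        # the context memory models where information is dense in feature space.
        while len(self.centroids) > self.csm_size:
            d = np.linalg.norm(self.centroids[:, None] - self.centroids[None], axis=-1)
            np.fill_diagonal(d, np.inf)
            i, j = np.unravel_index(np.argmin(d), d.shape)
            wi, wj = self.weights[i], self.weights[j]
            self.centroids[i] = (wi * self.centroids[i] + wj * self.centroids[j]) / (wi + wj)
            self.weights[i] = wi + wj
            self.centroids = np.delete(self.centroids, j, axis=0)
            self.weights = np.delete(self.weights, j)

    def retrieve_dam(self):
        """Fetch high-res features of the frames nearest the densest centroids."""
        keys = np.stack([k for k, _ in self.bank])
        picked = []
        for idx in np.argsort(-self.weights)[: self.dam_size]:
            nearest = np.argmin(np.linalg.norm(keys - self.centroids[idx], axis=-1))
            picked.append(self.bank[nearest][1])
        return picked
```

A driver loop would call memory.update(f_low, f_high) once per incoming frame and memory.retrieve_dam() at question time; because merging always shrinks the CSM back to csm_size centroids, per-frame cost stays bounded for arbitrarily long streams.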

Architecture

Pipeline
Overview of the Flash-VStream two-process framework. The frame handler process continuously encodes new frames, while the question handler process asynchronously responds to human inquiries in real time. Flash Memory is composed of interleaved Context Synopsis Memory (CSM) and Detail Augmentation Memory (DAM), organized in chronological order. CSM is updated by clustering low-resolution feature maps at the inter-frame level; DAM is updated by retrieving high-resolution feature maps of the most informative frames from a feature bank.
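The two-process layout can be sketched as follows. This is a hedged toy version, not the official code: encode_low, encode_high, and llm_answer are hypothetical callables, FlashMemory is the sketch from the Abstract section, and threads with a lock stand in for the real multi-process setup.

```python
import queue
import threading

frames = queue.Queue()     # stream of incoming video frames
questions = queue.Queue()  # user queries arriving at arbitrary times
lock = threading.Lock()    # guards the shared Flash Memory
memory = FlashMemory()     # toy memory class from the Abstract section

def frame_handler(encode_low, encode_high):
    """Continuously encode new frames and fold them into memory."""
    while True:
        frame = frames.get()
        if frame is None:          # sentinel: end of stream
            break
        lo, hi = encode_low(frame), encode_high(frame)
        with lock:
            memory.update(lo, hi)

def question_handler(llm_answer):
    """Answer queries against the current memory without pausing ingestion."""
    while True:
        q = questions.get()
        if q is None:              # sentinel: shutdown
            break
        with lock:                 # snapshot CSM centroids + retrieved DAM feats
            context = (memory.centroids.copy(), memory.retrieve_dam())
        print(llm_answer(q, context))
```

Each handler runs concurrently, e.g. threading.Thread(target=frame_handler, args=(encode_low, encode_high), daemon=True).start(), so answering a question never stalls frame ingestion, and vice versa.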

Efficient Long Video VQA

Results

Ablation Studies

Results

Case Study

Results

Left: PCA visualization of the Flash Memory distribution. Each point stands for the feature map of a single frame or a slice of memory. The CSM and DAM appropriately represent the distributional characteristics of the feature clusters. Right: Q&A case study. Question-answering cases of different types show the exceptional proficiency of the Flash-VStream model.

BibTeX

If you find these projects useful in your research, please consider citing:

@article{zhang2025flashvstream,
    title={Flash-VStream: Efficient Real-Time Understanding for Long Video Streams},
    author={Zhang, Haoji and Wang, Yiqin and Tang, Yansong and Liu, Yong and Feng, Jiashi and Jin, Xiaojie},
    journal={arXiv preprint arXiv:2506.23825},
    year={2025}
}
@article{zhang2024flashvstream,
    title={Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams},
    author={Zhang, Haoji and Wang, Yiqin and Tang, Yansong and Liu, Yong and Feng, Jiashi and Dai, Jifeng and Jin, Xiaojie},
    journal={arXiv preprint arXiv:2406.08085},
    year={2024}
}