PSS: Consecutive Frame Extrapolation with Predictive Sparse Shading

ACM Transactions on Graphics, Vol. 44, No. 6, Article 197, 2025
Zhizhen Wu1, Zhe Cao1, Yazhen Yuan2, Zhilong Yuan1, Rui Wang1, Yuchi Huo1,3*
1State Key Lab of CAD&CG, Zhejiang University, 2Tencent, 3Zhejiang Lab
*Indicates Corresponding Author

Abstract

The demand for high-frame-rate rendering keeps increasing with modern displays. Existing frame-generation and super-resolution techniques accelerate rendering by reducing the number of rendered samples across space or time. However, they rely on a uniform sampling-reduction strategy, which undersamples areas with complex details or dynamic shading. To address this, we propose to sparsely shade critical areas while reusing generated pixels in low-variation areas for neural extrapolation. Specifically, we introduce the Predictive Error-Flow-eXtrapolation Network (EFXNet), an architecture that predicts extrapolation errors, estimates flows, and extrapolates frames in a single pass. First, EFXNet leverages temporal coherence to predict extrapolation error and guide the sparse shading of dynamic areas. Second, EFXNet employs a target-grid correlation module to estimate robust optical flows from pixel correlations rather than raw pixel values. Finally, EFXNet uses dedicated motion representations for the historical geometric and lighting components to extrapolate temporally stable frames. Extensive experiments show that, compared with state-of-the-art methods, our frame extrapolation method achieves superior visual quality and temporal stability under a low rendering budget.
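To give a rough feel for the predict-then-shade loop described above, here is a minimal NumPy sketch, not the paper's actual pipeline: EFXNet's learned error prediction is replaced by a simple temporal-difference heuristic, extrapolation is naive linear extrapolation rather than the flow-based network, and the shading budget is a fixed fraction. The function names (`extrapolate`, `predict_error`, `sparse_shade`) and the `render_fn` callback are illustrative assumptions, not part of the paper.

```python
import numpy as np

def extrapolate(prev2, prev1):
    """Naive linear extrapolation: assume each pixel keeps its recent rate of change."""
    return 2.0 * prev1 - prev2

def predict_error(prev2, prev1):
    """Illustrative stand-in for learned error prediction: pixels that changed
    a lot between the two previous frames are likely to extrapolate poorly."""
    return np.abs(prev1 - prev2)

def sparse_shade(prev2, prev1, render_fn, budget=0.1):
    """Shade only the top-`budget` fraction of pixels by predicted error;
    reuse the cheap extrapolated values everywhere else."""
    guess = extrapolate(prev2, prev1)
    err = predict_error(prev2, prev1)
    k = max(1, int(budget * err.size))
    # Indices of the k pixels with the highest predicted error.
    idx = np.argpartition(err.ravel(), -k)[-k:]
    mask = np.zeros(err.size, dtype=bool)
    mask[idx] = True
    mask = mask.reshape(err.shape)
    # Shade only masked (critical) pixels; keep extrapolated values elsewhere.
    out = np.where(mask, render_fn(mask), guess)
    return out, mask
```

With a 10% budget on a 4x4 frame, only the single highest-error pixel is re-shaded; the rest of the frame is filled from extrapolation, which is what makes the overall rendering cost low.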

BibTeX

If you find our paper helpful, please consider citing:
@article{10.1145/3763363,
  author = {Wu, Zhizhen and Cao, Zhe and Yuan, Yazhen and Yuan, Zhilong and Wang, Rui and Huo, Yuchi},
  title = {Consecutive Frame Extrapolation with Predictive Sparse Shading},
  year = {2025},
  issue_date = {December 2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {44},
  number = {6},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3763363},
  doi = {10.1145/3763363},
  abstract = {The demand for high-frame-rate rendering keeps increasing in modern displays. Existing frame generation and super-resolution techniques accelerate rendering by reducing rendering samples across space or time. However, they rely on a uniform sampling reduction strategy, which undersamples areas with complex details or dynamic shading. To address this, we propose to sparsely shade critical areas while reusing generated pixels in low-variation areas for neural extrapolation. Specifically, we introduce the Predictive Error-Flow-eXtrapolation Network (EFXNet)-an architecture that predicts extrapolation errors, estimates flows, and extrapolates frames at once. Firstly, EFXNet leverages temporal coherence to predict extrapolation error and guide the sparse shading of dynamic areas. In addition, EFXNet employs a target-grid correlation module to estimate robust optical flows from pixel correlations rather than pixel values. Finally, EFXNet uses dedicated motion representations for the historical geometric and lighting components, respectively, to extrapolate temporally stable frames. Extensive experimental results show that, compared with state-of-the-art methods, our frame extrapolation method exhibits superior visual quality and temporal stability under a low rendering budget.},
  journal = {ACM Trans. Graph.},
  month = dec,
  articleno = {197},
  numpages = {15},
  keywords = {frame generation, temporal shading reuse, error prediction, sparse shading, deep learning}
}