
ICIP 2021

Paper Detail

Paper ID: ARS-10.1
Paper Title: ROBUST MULTI-FRAME FUTURE PREDICTION BY LEVERAGING VIEW SYNTHESIS
Authors: Kenan Emir Ak, Ying Sun, A*STAR, Singapore; Joo-Hwee Lim, A*STAR / Nanyang Technological University, Singapore
Session: ARS-10: Image and Video Analysis and Synthesis
Location: Area H
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Synthesis, Rendering, and Visualization
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: In this paper, we focus on the problem of video prediction, i.e., future frame prediction. Most state-of-the-art techniques synthesize a single future frame at each step. However, this forces the model to consume its own predicted frames when synthesizing multi-step predictions, resulting in gradual performance degradation as pixel errors accumulate. To alleviate this issue, we propose a model that can handle multi-step prediction directly. Additionally, we employ techniques that leverage view synthesis for future frame prediction, even though the two problems are treated independently in the literature. Our proposed method employs multi-view camera pose prediction and depth prediction networks to project the last available frame to the desired future frames via a differentiable point cloud renderer. For the synthesis of moving objects, we utilize an additional refinement stage. In experiments, we show that the proposed framework outperforms state-of-the-art methods on both the KITTI and Cityscapes datasets.
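The core geometric step the abstract describes can be illustrated with a simplified sketch: back-project each pixel of the last frame using a predicted depth map, apply the predicted relative camera pose, and re-project into the future view. This is a hypothetical, non-differentiable nearest-pixel approximation of the paper's differentiable point-cloud rendering; the function name, the rigid-transform convention, and the forward-splatting step are assumptions for illustration only, and the refinement stage for moving objects is omitted.

```python
import numpy as np

def warp_frame_to_future(frame, depth, K, rel_pose):
    """Project pixels of the last available frame into a predicted future view.

    frame:    (H, W, C) image
    depth:    (H, W) predicted depth map
    K:        (3, 3) camera intrinsics
    rel_pose: (4, 4) predicted relative camera pose (rigid transform)
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(np.float64)
    # Back-project pixels to 3D camera coordinates using predicted depth
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Apply the predicted relative pose to move the point cloud to the future camera
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_future = (rel_pose @ pts_h)[:3]
    # Re-project onto the future image plane
    proj = K @ pts_future
    z = proj[2:3]
    uv = proj[:2] / np.clip(z, 1e-6, None)
    # Forward-splat colors into the future frame (nearest-pixel approximation)
    out = np.zeros_like(frame)
    u = np.round(uv[0]).astype(int)
    v = np.round(uv[1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z.ravel() > 0)
    out[v[valid], u[valid]] = frame.reshape(-1, frame.shape[-1])[valid]
    return out
```

With an identity pose the warp reduces to a copy of the input frame, which is a useful sanity check; a real implementation would instead use a differentiable renderer so gradients flow back into the depth and pose networks.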