ICIP 2021 Paper Detail

Paper ID: ARS-10.3
Paper Title: IPRNN: AN INFORMATION-PRESERVING MODEL FOR VIDEO PREDICTION USING SPATIOTEMPORAL GRUS
Authors: Zheng Chang, Xinfeng Zhang, University of the Chinese Academy of Sciences, China; Shanshe Wang, Siwei Ma, Wen Gao, Peking University, China
Session: ARS-10: Image and Video Analysis and Synthesis
Location: Area H
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Synthesis, Rendering, and Visualization
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Videos are typically encoded into low-dimensional features to save computational resources in video prediction models. However, the information loss incurred during encoding restricts the performance of the predictive models. To address this problem, in this paper we propose an Information-Preserving Spatiotemporal Predictive Model for video prediction, denoted as IPRNN. In our method, we apply multiple skip connections between corresponding layers of the encoders and decoders, so that more useful information from the encoders can be recalled by the decoders, yielding better prediction performance. Moreover, to further reduce the computational cost of predictive models, we design a spatiotemporal gated recurrent unit (STGRU), which efficiently captures the spatial appearance information and temporal motion information of videos. Experimental results show that the proposed method outperforms other state-of-the-art methods.
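
The abstract names two components: skip connections between corresponding encoder and decoder layers, and a spatiotemporal gated recurrent unit (STGRU). Below is a minimal, illustrative PyTorch sketch of such an architecture. The layer depths, channel counts, and the ConvGRU-style gate design are assumptions for illustration only; the abstract does not specify the authors' exact formulation.

```python
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: a plausible stand-in for the paper's STGRU.
    Convolutions capture spatial appearance; the recurrent gating carries
    temporal motion. The actual STGRU gate design is an assumption here."""

    def __init__(self, in_ch: int, hid_ch: int, k: int = 3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update + reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, 1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde


class IPRNNSketch(nn.Module):
    """Encoder -> recurrent cell -> decoder, with skip connections between
    corresponding encoder and decoder layers so the decoder can recall
    encoder features (hypothetical two-level design)."""

    def __init__(self, ch: int = 16):
        super().__init__()
        self.hid = 2 * ch
        self.enc1 = nn.Conv2d(1, ch, 3, stride=2, padding=1)
        self.enc2 = nn.Conv2d(ch, self.hid, 3, stride=2, padding=1)
        self.cell = ConvGRUCell(self.hid, self.hid)
        self.dec2 = nn.ConvTranspose2d(self.hid, ch, 4, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose2d(ch, 1, 4, stride=2, padding=1)

    def forward(self, frames):
        # frames: (batch, time, 1, H, W); returns a prediction of the next frame.
        b, t, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.hid, hgt // 4, wid // 4)
        for i in range(t):
            e1 = torch.relu(self.enc1(frames[:, i]))  # (b, ch, H/2, W/2)
            e2 = torch.relu(self.enc2(e1))            # (b, 2*ch, H/4, W/4)
            h = self.cell(e2, h)
        # Skip connections: add encoder features back at matching scales.
        d2 = torch.relu(self.dec2(h + e2))
        return torch.sigmoid(self.dec1(d2 + e1))


# Example: predict the 6th frame from 5 observed 64x64 frames.
pred = IPRNNSketch()(torch.rand(2, 5, 1, 64, 64))
print(pred.shape)  # torch.Size([2, 1, 64, 64])
```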