
My ICIP 2021 Schedule


Paper Detail

Paper ID: TEC-7.3
Paper Title: OPS-NET: OVER-PARAMETERIZED SHARING NETWORKS FOR VIDEO FRAME INTERPOLATION
Authors: Zhen-Fang Wang, Yan-Jiang Wang, Shuai Shao, Bao-Di Liu, China University of Petroleum (East China), China
Session: TEC-7: Interpolation, Enhancement, Inpainting
Location: Area G
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Image and Video Processing: Interpolation, super-resolution, and mosaicing
Abstract: Video frame interpolation algorithms improve temporal resolution by inserting non-existent frames into a video sequence. With the help of skip connections, many kernel-based methods train deep neural networks to accurately establish the complicated spatiotemporal relationship between pixels in adjacent frames. Still, these connections operate only in the feature dimension. To this end, we introduce the Over-Parameterized Sharing Networks (OPS-Net) to implement weight sharing across different layers, capable of integrating deep and shallow features more directly. Specifically, we over-parameterize each convolutional layer to capture movement information efficiently, and the additional trainable weights are shared among distinct layers. After training, the additional weights are fused into the conventional convolutional layer and do not increase computation in the test phase. Experimental results show that our method generates favorable frames compared with several state-of-the-art approaches.
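The abstract's claim that the additional weights can be "fused into the conventional convolutional layer" after training rests on the linearity of convolution: two parallel kernels applied to the same input and summed are equivalent to a single kernel whose weights are the element-wise sum. The sketch below illustrates that general re-parameterization principle, not OPS-Net's exact architecture (which the abstract does not specify); the variable names and the single-channel `conv2d` helper are illustrative assumptions.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2D cross-correlation of a single-channel image x with kernel w."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w_main = rng.standard_normal((3, 3))   # conventional convolutional branch
w_extra = rng.standard_normal((3, 3))  # additional (over-parameterized) branch

# Training time: two parallel branches, outputs summed.
y_train = conv2d(x, w_main) + conv2d(x, w_extra)

# Test time: fuse the extra weights into a single kernel — same output,
# no additional computation at inference.
w_fused = w_main + w_extra
y_test = conv2d(x, w_fused)

assert np.allclose(y_train, y_test)
```

Because the fused kernel has the same shape as the original one, the test-phase network keeps its conventional structure and cost, which is what allows the extra training-time capacity to come for free at inference.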