ICIP 2021 Paper Detail

Paper ID: SMR-1.9
Paper Title: Deep Neural Networks for Full-Reference and No-Reference Audio-Visual Quality Assessment
Authors: Yuqin Cao, Xiongkuo Min, Wei Sun, Guangtao Zhai (Shanghai Jiao Tong University, China)
Session: SMR-1: Image and Video Quality Assessment
Location: Area F
Session Time: Tuesday, 21 September, 13:30 - 15:00
Presentation Time: Tuesday, 21 September, 13:30 - 15:00
Presentation: Poster
Topic: Image and Video Sensing, Modeling, and Representation: Perception and quality models for images & video
Abstract: In the field of audio-visual quality assessment, most previous works have focused only on single-mode visual or audio signals. However, for multi-mode signals, such as video with its accompanying audio, the overall perceptual quality depends on both modalities. In this paper, we propose an objective audio-visual quality assessment (AVQA) architecture for multi-mode signals based on deep neural networks. We first use a pretrained convolutional neural network to extract features from individual video frames and the concurrent short audio segments. The extracted features are then fed into Gated Recurrent Unit (GRU) networks for temporal sequence modeling. Finally, fully connected layers fuse the audio and visual qualities into the final quality score. The proposed architecture can be applied to both full-reference and no-reference AVQA. Experimental results on the LIVE-SJTU Database show that our model outperforms state-of-the-art AVQA methods.
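
The pipeline described in the abstract (pretrained CNN feature extraction, GRU-based temporal modeling, fully connected fusion into a quality score) can be illustrated with a minimal sketch. The sketch below assumes a PyTorch implementation with a ResNet-18 backbone, precomputed audio features, and illustrative tensor shapes and layer sizes; none of these specifics are given in the abstract, and only a no-reference path is shown.

    # Minimal sketch of the described AVQA pipeline (all architectural details
    # beyond CNN -> GRU -> FC fusion are assumptions, not the authors' design).
    import torch
    import torch.nn as nn
    from torchvision import models

    class AVQASketch(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=128):
            super().__init__()
            # Pretrained CNN extracts per-frame visual features (ResNet-18 is an assumption).
            backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.visual_cnn = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier head
            # GRUs model the temporal sequences of visual and audio features.
            self.visual_gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            self.audio_gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
            # Fully connected layers fuse the two modalities into one quality score.
            self.fusion = nn.Sequential(
                nn.Linear(2 * hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )

        def forward(self, frames, audio_feats):
            # frames: (batch, time, 3, H, W); audio_feats: (batch, time, feat_dim), assumed precomputed
            b, t = frames.shape[:2]
            v = self.visual_cnn(frames.flatten(0, 1)).flatten(1)  # (b*t, feat_dim)
            v = v.view(b, t, -1)
            _, v_last = self.visual_gru(v)       # final hidden state summarizes the visual sequence
            _, a_last = self.audio_gru(audio_feats)
            fused = torch.cat([v_last[-1], a_last[-1]], dim=-1)
            return self.fusion(fused).squeeze(-1)  # predicted quality score per clip

    # Example forward pass with random tensors (shapes are illustrative only).
    model = AVQASketch()
    score = model(torch.randn(2, 8, 3, 224, 224), torch.randn(2, 8, 512))
    print(score.shape)  # torch.Size([2])

A full-reference variant would additionally extract features from the reference video and audio and feed the feature differences through the same temporal and fusion stages; that extension is not shown here.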