Paper Detail

Paper ID: SMR-1.6
Paper Title: ATTENTION BASED NETWORK FOR NO-REFERENCE UGC VIDEO QUALITY ASSESSMENT
Authors: Fuwang Yi, Shanghai Jiao Tong University, China; Mianyi Chen, Tencent, China; Wei Sun, Xiongkuo Min, Yuan Tian, Guangtao Zhai, Shanghai Jiao Tong University, China
Session: SMR-1: Image and Video Quality Assessment
Location: Area F
Session Time: Tuesday, 21 September, 13:30 - 15:00
Presentation Time: Tuesday, 21 September, 13:30 - 15:00
Presentation: Poster
Topic: Image and Video Sensing, Modeling, and Representation: Perception and quality models for images & video
Abstract: The quality assessment of user-generated content (UGC) videos is a challenging problem due to the absence of reference videos and the complexity of their distortions. Traditional no-reference video quality assessment (NR-VQA) algorithms mainly target specific synthetic distortions; less attention has been paid to the authentic distortions in UGC videos, which are unevenly distributed in both the spatial and temporal domains. In this paper, we propose an end-to-end neural network model for UGC videos based on the attention mechanism. The key step in our approach is to embed attention modules in the feature extraction network, which effectively extract local distortion information. In addition, to exploit the temporal perception mechanism of the human visual system (HVS), a gated recurrent unit (GRU) and a temporal pooling layer are integrated into the proposed model. We validate the proposed model on three public in-the-wild VQA databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that the proposed method outperforms state-of-the-art NR-VQA models.
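
For readers wanting a concrete picture of the architecture the abstract outlines, the sketch below shows a minimal attention-based NR-VQA pipeline in PyTorch: a per-frame CNN feature extractor with an embedded channel-attention module, a GRU over the frame features, and a temporal pooling step. Every module choice here (the squeeze-and-excitation-style attention, the small backbone, mean pooling as a stand-in for the temporal pooling layer, and all hyperparameters) is an illustrative assumption, not the authors' implementation.

# Hypothetical sketch of the described architecture, not the paper's code.
# All layer sizes and module variants below are assumptions for illustration.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed variant)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.gate(x)  # reweight feature channels

class AttentionVQA(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128):
        super().__init__()
        # Per-frame feature extractor with an embedded attention module.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            ChannelAttention(feat_dim),   # attends to local distortion cues
            nn.AdaptiveAvgPool2d(1),
        )
        # GRU models the temporal perception over the frame sequence.
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # per-frame quality score

    def forward(self, video):  # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1)).flatten(1)  # (B*T, feat_dim)
        seq, _ = self.gru(feats.view(b, t, -1))                # (B, T, hidden_dim)
        frame_scores = self.head(seq).squeeze(-1)              # (B, T)
        return frame_scores.mean(dim=1)  # temporal pooling (mean as stand-in)

# Usage: score a batch of 2 clips of 8 RGB frames at 112x112.
model = AttentionVQA()
scores = model(torch.randn(2, 8, 3, 112, 112))
print(scores.shape)  # torch.Size([2])

In the paper's setting such a model would be trained end to end against subjective mean opinion scores from the listed databases; the mean over frame scores above is only the simplest possible temporal pooling, used here to keep the sketch self-contained.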