ICIP 2021 Paper Detail

Paper ID: ARS-8.11
Paper Title: PERCEPTUAL QUALITY ASSESSMENT OF DIBR SYNTHESIZED VIEWS USING SALIENCY BASED DEEP FEATURES
Authors: Shubham Chaudhary, Alokendu Mazumder, Vinit Jakhetiya, Deebha Mumtaz, Badri Subudhi, Indian Institute of Technology Jammu, India
Session: ARS-8: Image and Video Mid-Level Analysis
Location: Area I
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Mid-Level Analysis
Abstract: In recent years, Depth-Image-Based-Rendering (DIBR) synthesized views have gained popularity due to their numerous visual media applications. Consequently, research into their quality assessment (QA) has also gained momentum. In this work, we propose an efficient metric that estimates the perceptual quality of DIBR synthesized views via the extraction of deep features from a pretrained CNN model. In DIBR synthesized views, geometric distortions generally arise near objects due to occlusion, and the human visual system is quite sensitive to these objects. On the other hand, saliency maps can efficiently highlight perceptually important objects. With this intuition, instead of extracting deep features directly from DIBR synthesized views, we obtain the refined feature vector from their corresponding saliency maps. Moreover, most pixels with geometric distortions have a nearly similar impact on the perceptual quality of 3D synthesized views. Considering this, we propose to fuse the feature maps using a cosine similarity measure based on the deviation of one feature vector from another. It should be emphasized that no training is performed in the proposed algorithm; all features are extracted from the pretrained vanilla VGG-16 architecture. The proposed metric, when applied to the standard database, achieves a PLCC of 0.762 and an SRCC of 0.7513, outperforming existing state-of-the-art QA metrics.
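
To make the pipeline in the abstract concrete, the following is a minimal sketch, assuming PyTorch/torchvision: saliency maps are passed through a pretrained vanilla VGG-16 used as a fixed feature extractor (no training, as the abstract states), intermediate feature maps are flattened into vectors, and per-layer cosine similarities are fused into a single score. The tap points (TAP_LAYERS), the helper names (deep_features, quality_score), the reference-vs-synthesized comparison, and the plain averaging fusion are all illustrative assumptions; the abstract does not specify the saliency model, the tapped layers, or the exact fusion rule.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

# Pretrained vanilla VGG-16 used as a fixed feature extractor
# (the abstract states that no training is performed).
vgg = models.vgg16(pretrained=True).features.eval()

# Hypothetical tap points: the max-pool layer closing each of the
# five convolutional blocks. The paper may use different layers.
TAP_LAYERS = {4, 9, 16, 23, 30}

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_features(saliency_map: Image.Image) -> list[torch.Tensor]:
    """Collect intermediate VGG-16 features of a saliency map."""
    # Saliency maps are grayscale; replicate to 3 channels for VGG input.
    x = preprocess(saliency_map.convert("RGB")).unsqueeze(0)
    feats = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in TAP_LAYERS:
                feats.append(x.flatten(start_dim=1))  # one vector per block
    return feats

def quality_score(ref_saliency: Image.Image, syn_saliency: Image.Image) -> float:
    """Fuse per-block cosine similarities into one quality estimate."""
    sims = [F.cosine_similarity(fr, fs).item()
            for fr, fs in zip(deep_features(ref_saliency),
                              deep_features(syn_saliency))]
    # Plain averaging is an assumption; the paper's fusion rule, based on
    # the deviation of one feature vector from another, is not fully
    # specified in the abstract.
    return sum(sims) / len(sims)

# Hypothetical usage with precomputed saliency maps:
# score = quality_score(Image.open("ref_saliency.png"),
#                       Image.open("syn_saliency.png"))
```

Note that the abstract does not say whether the metric compares the synthesized view against a reference; the sketch above assumes a reference-based comparison purely for illustration.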