Paper Detail

Paper ID: ARS-1.10
Paper Title: JOINT CO-ATTENTION AND CO-RECONSTRUCTION REPRESENTATION LEARNING FOR ONE-SHOT OBJECT DETECTION
Authors: Jinghui Chu, Jiawei Feng, Peiguang Jing, Wei Lu, Tianjin University, China
Session: ARS-1: Object Detection
Location: Area I
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Analysis, Synthesis, and Retrieval: Image & Video Mid-Level Analysis
Abstract: One-shot object detection aims to detect all candidate instances of a class in a target image when that class is unseen during training and only one labeled query image is given at test time. However, insufficient utilization of this single known sample is one significant cause of the performance degradation of current one-shot object detection models. To tackle this problem, we develop joint co-attention and co-reconstruction (CoAR) representation learning for one-shot object detection. First, we propose a high-order feature fusion operation to exploit the deep co-attention of each target-query pair, which aims to enhance the correlation within the same class. Second, we use a low-rank structure to reconstruct the target-query features at the channel level, which aims to remove irrelevant noise and enhance the latent similarity between the region proposals in the target image and the query image. Experiments on both the PASCAL VOC and MS COCO datasets demonstrate that our method outperforms previous state-of-the-art algorithms.
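The abstract describes two components: a high-order (bilinear) co-attention between target and query features, and a low-rank reconstruction over the channel dimension. The snippet below is a minimal, hypothetical PyTorch sketch of those two ideas only; the bilinear affinity form, the rank, the sigmoid reweighting, and all names (CoARSketch, channels, rank) are assumptions for illustration, not the authors' CoAR implementation.

```python
import torch
import torch.nn as nn


class CoARSketch(nn.Module):
    """Illustrative sketch only: bilinear co-attention between target and
    query feature maps, followed by a low-rank channel reconstruction.
    Shapes, names, and the exact fusion form are assumptions."""

    def __init__(self, channels=256, rank=32):
        super().__init__()
        # Low-rank channel reconstruction: project channels down to a small
        # rank and back, to suppress class-irrelevant channel noise.
        self.down = nn.Linear(channels, rank, bias=False)
        self.up = nn.Linear(rank, channels, bias=False)

    def forward(self, target_feat, query_feat):
        # target_feat: (B, C, Ht, Wt) region-proposal features from the target image
        # query_feat:  (B, C, Hq, Wq) features of the single labeled query image
        B, C, Ht, Wt = target_feat.shape
        t = target_feat.flatten(2)                                # (B, C, Nt)
        q = query_feat.flatten(2)                                 # (B, C, Nq)

        # Second-order (bilinear) affinity between every target/query location.
        affinity = torch.einsum('bcn,bcm->bnm', t, q) / C ** 0.5  # (B, Nt, Nq)

        # Co-attention: each target location aggregates query locations.
        t_att = torch.einsum('bnm,bcm->bcn', affinity.softmax(dim=-1), q)
        attended = (t + t_att).view(B, C, Ht, Wt)

        # Co-reconstruction: low-rank projection of a channel descriptor,
        # then reweight the attended target features channel by channel.
        pooled = attended.mean(dim=(2, 3))                        # (B, C)
        recon = self.up(self.down(pooled))                        # (B, C)
        weights = torch.sigmoid(recon).view(B, C, 1, 1)
        return attended * weights


# Example usage with placeholder 7x7 RoI features:
# model = CoARSketch(channels=256, rank=32)
# out = model(torch.randn(2, 256, 7, 7), torch.randn(2, 256, 7, 7))
```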