ICIP 2021 Paper Detail

Paper ID: SS-NNC.14
Paper Title: Discriminative patch descriptor learning with focal triplet loss function
Authors: Song Wang, Xin Guo, Yun Tie, Lin Qi, Zhengzhou University, China; Ling Guan, Ryerson University, Canada
Session: SS-NNC: Special Session: Neural Network Compression and Compact Deep Features
Location: Area B
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Special Sessions: Neural Network Compression and Compact Deep Features: From Methods to Standards
Abstract: This paper proposes a focal triplet loss function for discriminative patch descriptor learning. The standard triplet loss function constrains the distance difference between matching and non-matching samples. However, as training proceeds, the majority of triplets in each batch come to satisfy the loss constraint and produce low loss values, creating the false impression that the model is well trained. To address this problem, the focal triplet loss function is proposed to weaken the impact of easy triplets and focus training on hard ones. By emphasizing the importance of hard triplets during training, the proposed loss forces the fixed-dimensional descriptor vectors to carry more discriminative information from the patches. Owing to the focal mechanism, the proposed method outperforms the state of the art on the UBC dataset for the image matching task. Furthermore, to demonstrate its effectiveness, we extend the focal triplet loss to the cross-modal retrieval task. The experimental results indicate that the proposed method can also improve visual-semantic embedding learning.
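
The focal weighting idea described in the abstract can be illustrated with a short sketch. The exact formulation is not given here, so the PyTorch code below is only a minimal, assumption-laden illustration: each triplet's standard margin loss is scaled by a focal factor that shrinks toward zero for easy (already-satisfied) triplets. The function name focal_triplet_loss, the margin value, and the exponent gamma are hypothetical choices made by analogy with the focal loss for classification, not the authors' formulation.

    import torch.nn.functional as F

    def focal_triplet_loss(anchor, positive, negative, margin=1.0, gamma=2.0):
        """anchor, positive, negative: (N, D) batches of patch descriptors."""
        d_pos = F.pairwise_distance(anchor, positive)   # matching distances
        d_neg = F.pairwise_distance(anchor, negative)   # non-matching distances
        per_triplet = F.relu(d_pos - d_neg + margin)    # standard triplet margin loss
        # Focal weighting (assumed form): easy triplets with near-zero loss get a
        # near-zero weight, while hard triplets near the margin keep full weight.
        focal_weight = (per_triplet / margin).clamp(max=1.0) ** gamma
        return (focal_weight * per_triplet).mean()

In such a scheme, the weighted loss would replace the plain triplet loss during descriptor training, so that gradients are dominated by the hard triplets rather than by the many triplets that already satisfy the constraint.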