ICIP 2021

Paper Detail

Paper ID: SS-MIA.4
Paper Title: A TEACHER-STUDENT LEARNING BASED ON COMPOSED GROUND-TRUTH IMAGES FOR ACCURATE CEPHALOMETRIC LANDMARK DETECTION
Authors: Yu Song, Ritsumeikan University, Japan; Xu Qiao, Shandong University, China; Yutaro Iwamoto, Yen-Wei Chen, Ritsumeikan University, Japan
Session: SS-MIA: Special Session: Deep Learning and Precision Quantitative Imaging for Medical Image Analysis
Location: Area A
Session Time: Wednesday, 22 September, 14:30 - 16:00
Presentation Time: Wednesday, 22 September, 14:30 - 16:00
Presentation: Poster
Topic: Special Sessions: Deep Learning and Precision Quantitative Imaging for Medical Image Analysis
IEEE Xplore: Open Preview available
Abstract: Computer-aided automatic cephalometric landmark localization has been an active research topic since the last century. Recently proposed deep learning-based methods have made great contributions to this topic. Among them, convolutional neural network (CNN)-based regression is widely used, where ground-truth (GT) information is mainly used in the calculation of the loss function, which penalizes the difference between the predicted landmark locations and the ground-truth locations through backpropagation. However, considering the limited amount of annotated cephalometric data, we believe performance can be further improved by better utilizing the ground-truth information. In this paper, we propose a teacher-student learning method that uses GT images for accurate cephalometric landmark detection. We first use images composed from GT landmarks as input images to train a detection model, which serves as a teacher model. The teacher model is then used to guide a student model, trained on the original images, by transferring useful features. We believe the features of the GT images and the original images have similar domain distributions, since both represent the same anatomical structure. We validate our method on a public grand-challenge dataset, where it achieves better performance than state-of-the-art methods.
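The training scheme the abstract describes (a teacher trained on GT-composed images guiding a student trained on original images by transferring features) can be sketched as a combined loss. This is a minimal illustrative sketch, not the paper's actual implementation: the feature-matching term, the L2 landmark-regression term, and the `alpha` weighting are all assumptions for exposition.

```python
def feature_transfer_loss(student_feats, teacher_feats):
    """Mean-squared error between flattened intermediate feature maps.

    Encourages the student (fed original images) to reproduce the
    teacher's features (learned from GT-composed images), under the
    assumption that the two domains share a similar distribution.
    """
    diffs = [(s - t) ** 2 for s, t in zip(student_feats, teacher_feats)]
    return sum(diffs) / len(diffs)

def regression_loss(pred_landmarks, gt_landmarks):
    """Mean squared Euclidean distance between predicted and GT landmarks.

    Each landmark is an (x, y) pair; this is the standard regression
    term driven directly by the ground-truth annotations.
    """
    per_point = [
        sum((p - g) ** 2 for p, g in zip(pred, gt))
        for pred, gt in zip(pred_landmarks, gt_landmarks)
    ]
    return sum(per_point) / len(per_point)

def total_student_loss(pred, gt, student_feats, teacher_feats, alpha=0.5):
    """Student objective: regression loss plus weighted feature transfer.

    `alpha` is a hypothetical balancing weight, not a value from the paper.
    """
    return regression_loss(pred, gt) + alpha * feature_transfer_loss(
        student_feats, teacher_feats
    )
```

In a real pipeline, both loss terms would be backpropagated through the student network only, with the teacher's weights frozen after its training on the GT-composed images.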