ICIP 2021 Paper Detail

Paper ID: MLR-APPL-IP-6.3
Paper Title: KNOWLEDGE TRANSFERRED FINE-TUNING FOR ANTI-ALIASED CONVOLUTIONAL NEURAL NETWORK IN DATA-LIMITED SITUATION
Authors: Satoshi Suzuki, Shoichiro Takeda, Ryuichi Tanida, Hideaki Kimata, NTT Corporation, Japan; Hayaru Shouno, University of Electro-Communications, Japan
Session: MLR-APPL-IP-6: Machine learning for image processing 6
Location: Area E
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image processing
Abstract: Anti-aliased convolutional neural networks (CNNs) introduce blur filters to intermediate representations in CNNs to achieve high accuracy. A promising way to build a new anti-aliased CNN is to fine-tune a pre-trained CNN, which can easily be found online, with blur filters. However, blur filters drastically degrade the pre-trained representation, so the fine-tuning needs to rebuild the representation by using massive training data. Therefore, if the training data is limited, the fine-tuning cannot work well because it induces overfitting to the limited training data. To tackle this problem, this paper proposes "knowledge transferred fine-tuning." On the basis of the idea of knowledge transfer, our method transfers the knowledge from intermediate representations in the pre-trained CNN to the anti-aliased CNN while fine-tuning. We transfer only essential knowledge using a pixel-level loss that transfers detailed knowledge and a global-level loss that transfers coarse knowledge. Experimental results demonstrate that our method significantly outperforms the simple fine-tuning method.
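The abstract describes transferring knowledge from a pre-trained (teacher) feature map to the anti-aliased (student) feature map through a pixel-level loss and a global-level loss. The following is a minimal, hedged sketch of what such a pair of losses could look like; the function names, the exact loss formulations, and the weighting are illustrative assumptions, not the paper's definitions:

```python
# Hedged sketch of the two transfer losses on a single-channel 2-D
# feature map (a list of rows of floats). The formulations below
# (mean squared error per pixel; squared difference of the spatial
# means) are assumptions for illustration only.

def pixel_level_loss(teacher, student):
    """Mean squared error over every spatial position (detailed knowledge)."""
    total, n = 0.0, 0
    for t_row, s_row in zip(teacher, student):
        for t, s in zip(t_row, s_row):
            total += (t - s) ** 2
            n += 1
    return total / n

def global_level_loss(teacher, student):
    """Squared difference of spatially averaged activations (coarse knowledge)."""
    def spatial_mean(fmap):
        vals = [v for row in fmap for v in row]
        return sum(vals) / len(vals)
    return (spatial_mean(teacher) - spatial_mean(student)) ** 2

def transfer_loss(teacher, student, w_pixel=1.0, w_global=1.0):
    """Weighted sum of both losses; the weights are assumed, not from the paper."""
    return (w_pixel * pixel_level_loss(teacher, student)
            + w_global * global_level_loss(teacher, student))
```

In an actual fine-tuning loop this term would be added to the task loss, so the anti-aliased CNN is pulled toward the pre-trained intermediate representations while it trains on the limited data.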