ICIP 2021

Paper Detail

Paper ID: MLR-APPL-IVASR-1.11
Paper Title: FEW SHOT LEARNING FOR INFRA-RED OBJECT RECOGNITION USING ANALYTICALLY DESIGNED LOW LEVEL FILTERS FOR DATA REPRESENTATION
Authors: Maliha Arif, Abhijit Mahalanobis, University of Central Florida, United States
Session: MLR-APPL-IVASR-1: Machine learning for image and video analysis, synthesis, and retrieval 1
Location: Area D
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
IEEE Xplore: Open Preview available
Abstract: It is well known that deep convolutional neural networks (CNNs) generalize well over a large number of classes when ample training data is available. However, training with smaller datasets does not always achieve robust performance. In such cases, we show that using analytically derived filters in the lowest layer enables a network to achieve better performance than learning from scratch on a relatively small dataset. These class-agnostic filters represent the underlying manifold of the data space and also generalize to new or unknown classes that may occur on the same manifold. This directly enables new classes to be learned from very few images by simply fine-tuning the final few layers of the network. We illustrate the advantages of our method using the publicly available set of infra-red images of vehicular ground targets. We compare a simple CNN trained using our method with transfer learning performed using the VGG-16 network, and show that when the number of training images is limited, the proposed approach not only achieves better results on the trained classes, but also outperforms a standard network when learning a new object class.
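The core idea in the abstract can be illustrated with a minimal NumPy sketch: the first layer of the network uses filters defined by a closed-form expression rather than learned weights, so it needs no training data, and only later layers would be fine-tuned on the few available images. The Gabor parameterization below is a common example of an analytically designed low-level filter bank and is an illustrative assumption, not the authors' exact filter design.

```python
import numpy as np

def gabor_kernel(size=7, theta=0.0, sigma=2.0, lam=4.0):
    """Analytically defined Gabor filter: derived from a formula, no training data."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()  # zero mean, so the filter acts as a band-pass

# A fixed bank of oriented filters stands in for the learned first conv layer.
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]

def first_layer(img, filters):
    """Apply the fixed filter bank via 'valid' 2-D correlation (frozen layer)."""
    k = filters[0].shape[0]
    h, w = img.shape[0] - k + 1, img.shape[1] - k + 1
    out = np.empty((len(filters), h, w))
    for c, f in enumerate(filters):
        for i in range(h):
            for j in range(w):
                out[c, i, j] = np.sum(img[i:i + k, j:j + k] * f)
    return out

# These feature maps feed the rest of the network; in a few-shot setting,
# only the final few layers would be fine-tuned on the new classes.
feats = first_layer(np.random.rand(16, 16), bank)
print(feats.shape)  # (4, 10, 10)
```

Because the filter bank is class-agnostic, the same frozen first layer can be reused when a new object class appears, which is what makes fine-tuning only the final layers feasible with very few images.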