Paper Detail

Paper ID SS-NNC.6
Paper Title MIND THE STRUCTURE: ADOPTING STRUCTURAL INFORMATION FOR DEEP NEURAL NETWORK COMPRESSION
Authors Homayun Afrabandpey, Nokia Technologies, Finland; Anton Muravev, Tampere University, Finland; Hamed R. Tavakoli, Honglei Zhang, Francesco Cricri, Nokia Technologies, Finland; Moncef Gabbouj, Tampere University, Finland; Emre Aksu, Nokia Technologies, Finland
Session SS-NNC: Special Session: Neural Network Compression and Compact Deep Features
Location Area B
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Poster
Topic Special Sessions: Neural Network Compression and Compact Deep Features: From Methods to Standards
Abstract Deep neural networks have a huge number of parameters and require a large number of bits for representation. This hinders their adoption in decentralized environments, where model transfer among different parties is characteristic of the setting while communication bandwidth is limited. Parameter quantization is a compression approach that addresses this challenge by reducing the number of bits required to represent a model, e.g. a neural network. However, the majority of existing neural network quantization methods do not exploit the structural information of layers and parameters during quantization. In this paper, focusing on Convolutional Neural Networks (CNNs), we present a novel quantization approach that employs the structural information of neural network layers and their corresponding parameters. Starting from a pre-trained CNN, we categorize network parameters into different groups based on the similarity of their layers and their spatial structure. The parameters of each group are clustered independently, and the centroid of each cluster is used as the representative for all parameters in that cluster. Finally, the centroids and the cluster indices of the parameters are used as a compact representation of the parameters. Experiments on two different tasks, i.e., acoustic scene classification and image compression, demonstrate the effectiveness of the proposed approach.
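
As a rough illustration of the clustering-based quantization described in the abstract, the sketch below groups parameters and quantizes each group independently with k-means, storing only a per-group codebook of centroids plus a cluster index per parameter. The grouping rule (convolutional kernels vs. other tensors), the codebook size, and the use of scikit-learn's KMeans are assumptions made for illustration; they are not the paper's exact grouping or clustering procedure.

```python
# Minimal sketch of structure-aware parameter quantization (assumptions noted above).
import numpy as np
from sklearn.cluster import KMeans

def quantize_group(weights, n_clusters=16):
    """Cluster a weight tensor's values and return (codebook, per-parameter indices)."""
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(flat)
    codebook = km.cluster_centers_.ravel()      # centroid value per cluster
    indices = km.labels_.astype(np.uint8)       # cluster index per parameter
    return codebook, indices

def dequantize_group(codebook, indices, shape):
    """Rebuild an approximate weight tensor from the codebook and indices."""
    return codebook[indices].reshape(shape)

def group_parameters(named_weights):
    """Hypothetical grouping: 4-D convolution kernels vs. all other tensors.

    The paper groups parameters by layer similarity and spatial structure;
    this shape-based split is only a stand-in for illustration.
    """
    groups = {"conv_kernels": [], "other": []}
    for name, tensor in named_weights.items():
        arr = np.asarray(tensor)
        key = "conv_kernels" if arr.ndim == 4 else "other"   # (out, in, kH, kW)
        groups[key].append((name, arr))
    return groups

# Example: quantize a random 3x3 convolution kernel tensor to a 16-entry codebook.
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
codebook, idx = quantize_group(w, n_clusters=16)
w_hat = dequantize_group(codebook, idx, w.shape)
```

With 16 clusters, each parameter needs only log2(16) = 4 bits for its index instead of 32 bits for a float value, plus a small per-group codebook; the reconstruction w_hat approximates the original weights by their group-wise cluster centroids.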