
ICIP 2021


Paper Detail

Paper ID: TEC-7.6
Paper Title: Searching Architecture and Precision for U-net based Image Restoration Tasks
Authors: Krishna Teja Chitty-Venkata, Arun Somani, Iowa State University, United States; Sreenivas Kothandaraman, Intel Corporation, United States
Session: TEC-7: Interpolation, Enhancement, Inpainting
Location: Area G
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Image and Video Processing: Interpolation, super-resolution, and mosaicing
IEEE Xplore: Open Preview available
Abstract: Manually architecting Deep Neural Networks (DNNs) has led to the success of Deep Learning in many domains. However, recent DNNs designed using Neural Architecture Search (NAS) have exceeded manually designed architectures and have significantly reduced the human effort to develop complex networks. Current works use NAS to identify a cell architecture constrained by a fixed order of operations that is then replicated throughout the network. The constraints potentially limit the effectiveness of NAS in converging on a more efficient DNN architecture. In the first part of our paper, we propose Operation Search, a search on an enlarged topological space for U-net and its variants that retains efficiency. The idea is to allow for custom cells (operations and their sequence) at various levels of the network to maximize image quality while being sensitive to computation cost. In the second part of our paper, we propose custom quantization at various levels, resulting in a mixed-precision network. Additionally, we increase the search efficiency by constraining the search space to use the same precision for both weights and activations at any level. This does not result in computational inefficiency because it matches the operand precisions supported by Tensor Core enabled GPUs.
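The abstract's two search axes can be illustrated with a minimal sketch: each level of a toy U-net independently chooses an operation (per-level Operation Search rather than a single replicated cell) and a single bit-width shared by that level's weights and activations (the paper's mixed-precision constraint). All names, candidate operations, cost and quality numbers, and the random-search strategy below are invented for illustration and are not taken from the paper.

```python
import random

# Hypothetical per-level candidate operations and their invented
# relative compute costs / quality scores (not from the paper).
OPS = ["conv3x3", "conv5x5", "sep_conv3x3", "dilated_conv3x3"]
OP_COST = {"conv3x3": 9, "conv5x5": 25, "sep_conv3x3": 3, "dilated_conv3x3": 9}
OP_QUALITY = {"conv3x3": 1.0, "conv5x5": 1.3, "sep_conv3x3": 0.8,
              "dilated_conv3x3": 1.1}

# One bit-width per level, covering both weights and activations,
# restricted to precisions commonly supported by Tensor Core GPUs.
PRECISIONS = [4, 8, 16]
NUM_LEVELS = 4  # encoder/decoder levels of a toy U-net


def sample_candidate():
    """Sample one architecture: an (operation, bit-width) pair per level."""
    return [(random.choice(OPS), random.choice(PRECISIONS))
            for _ in range(NUM_LEVELS)]


def cost(arch):
    """Toy cost model: operation cost scaled by precision (bits / 16)."""
    return sum(OP_COST[op] * bits / 16 for op, bits in arch)


def quality(arch):
    """Toy quality proxy: higher-quality ops and wider precisions score more."""
    return sum(OP_QUALITY[op] * bits / 16 for op, bits in arch)


def random_search(budget, trials=200, seed=0):
    """Return the highest-quality sampled architecture within a cost budget."""
    random.seed(seed)
    best = None
    for _ in range(trials):
        arch = sample_candidate()
        if cost(arch) <= budget and (best is None or
                                     quality(arch) > quality(best)):
            best = arch
    return best


if __name__ == "__main__":
    best = random_search(budget=20)
    print(best, cost(best), quality(best))
```

The actual paper searches this kind of per-level space with a NAS method rather than random sampling; the sketch only shows how tying one precision to both operands per level keeps the joint operation-and-precision space small.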