ICIP 2021

Paper Detail

Paper ID: MLR-APPL-IVSMR-3.9
Paper Title: CRACK DETECTION AND REFINEMENT VIA DEEP REINFORCEMENT LEARNING
Authors: Jinhyung Park, Yi-Chun Chen, Yu-Jhe Li, Kris Kitani, Carnegie Mellon University, United States
Session: MLR-APPL-IVSMR-3: Machine learning for image and video sensing, modeling and representation 3
Location: Area D
Session Time: Wednesday, 22 September, 14:30 - 16:00
Presentation Time: Wednesday, 22 September, 14:30 - 16:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video sensing, modeling, and representation
IEEE Xplore: Open Preview available
Abstract: Detecting small cracks in concrete is difficult due to the complexity and thinness of cracking patterns, which requires the development of refined vision-based segmentation algorithms that can accurately characterize the details of crack defects. While existing methods are good at generally outlining cracks, due to inherent differences in shape distributions between common objects and cracks, their predictions often have disconnected segments and inaccuracy along boundaries. To this end, we develop a refinement framework using reinforcement learning (RL) that can better recognize details specific to cracks. Our method uses an RL agent to iteratively improve per-pixel crack predictions of a general segmentation model. We find that in addition to connecting gaps in predictions, the RL agent is also able to detect cracks that are missed in the original predictions. It does so by using the originally detected regions as crack priors to branch out from. Refining outputs of a commonly used per-pixel segmentation model, our method outperforms the current state-of-the-art approaches for crack segmentation. Our experiments also demonstrate that our method generalizes well to a similar task of vessel segmentation.
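The core idea in the abstract — starting from an initial segmentation and iteratively branching out from detected regions to bridge gaps — can be sketched with a toy refinement loop. This is a minimal illustration only: the paper's method learns the per-pixel update policy with an RL agent, whereas the sketch below substitutes a fixed hand-written rule (grow into neighboring pixels whose predicted crack probability exceeds a threshold). All names (`refine_mask`, `prob_map`, `init_mask`) are hypothetical, not from the paper.

```python
import numpy as np

def refine_mask(prob_map, init_mask, steps=10, threshold=0.5):
    """Toy iterative refinement of a binary crack mask.

    Starting from init_mask (the segmentation model's prediction,
    used as a crack prior), repeatedly add 4-connected neighbor
    pixels whose probability in prob_map exceeds the threshold.
    NOTE: illustrative fixed rule, not the paper's learned RL policy.
    """
    mask = init_mask.astype(bool).copy()
    for _ in range(steps):
        # Dilation step: any pixel 4-adjacent to the current mask
        # becomes a candidate for inclusion.
        pad = np.pad(mask, 1)
        neighbor = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                    pad[1:-1, :-2] | pad[1:-1, 2:])
        candidates = neighbor & ~mask & (prob_map > threshold)
        if not candidates.any():
            break  # converged: nothing left to add
        mask |= candidates
    return mask
```

For example, a thin crack whose probability dips in the middle (so the initial thresholded prediction is disconnected) gets bridged because the growth starts from the already-detected segment and walks across the moderate-probability gap pixels — the same intuition as using detected regions as priors to branch out from.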