
ICIP 2021


Paper Detail

Paper ID: SS-AVV.11
Paper Title: AN ATTENTION FUSION NETWORK FOR EVENT-BASED VEHICLE OBJECT DETECTION
Authors: Mengyun Liu, Na Qi, Yunhui Shi, Baocai Yin, Beijing University of Technology, China
Session: SS-AVV: Special Session: Autonomous Vehicle Vision
Location: Area A
Session Time: Monday, 20 September, 13:30 - 15:00
Presentation Time: Monday, 20 September, 13:30 - 15:00
Presentation: Poster
Topic: Special Sessions: Autonomous Vehicle Vision
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: Under extreme conditions such as excessive light, insufficient light, or high-speed motion, vehicle detection with frame-based cameras remains challenging. Event cameras capture frame and event data asynchronously, which is of great help in addressing object detection under these extreme conditions. We propose a fusion network with an Attention Fusion module for vehicle object detection that jointly exploits the features of both frame and event data. The frame and event data are separately fed into a symmetric framework based on Gaussian YOLOv3, which models the bounding-box (bbox) coordinates of YOLOv3 as Gaussian parameters and predicts the localization uncertainty of the bbox with a redesigned cross-entropy loss function. The feature maps of these Gaussian parameters and the confidence map in each layer are deeply fused in the Attention Fusion module. Finally, the feature maps of the frame and event data are concatenated at the detection layer to improve detection accuracy. Experimental results show that the proposed method outperforms state-of-the-art methods that use only a traditional frame-based network, as well as joint networks combining event and frame information.
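The core idea of the abstract, attention-weighted fusion of frame and event feature maps, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `attention_fuse`, the per-pixel sigmoid gate, and the stand-in weights `w`/`b` (playing the role of a learned 1x1 convolution) are all assumptions made for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(frame_feat, event_feat, w, b):
    """Hypothetical attention fusion of two (C, H, W) feature maps.

    `w` (shape (2C,)) and `b` stand in for learned 1x1-conv weights that
    would normally produce the attention map from the stacked features.
    """
    # Stack both modalities along the channel axis: (2C, H, W).
    stacked = np.concatenate([frame_feat, event_feat], axis=0)
    # Collapse channels into a per-pixel attention map in [0, 1].
    attn = sigmoid(np.tensordot(w, stacked, axes=([0], [0])) + b)  # (H, W)
    # Attention-weighted blend of frame and event features.
    return attn * frame_feat + (1.0 - attn) * event_feat

# Usage: with zero weights the gate is sigmoid(0) = 0.5, i.e. an even blend.
rng = np.random.default_rng(0)
frame = rng.normal(size=(4, 8, 8))
event = rng.normal(size=(4, 8, 8))
fused = attention_fuse(frame, event, w=np.zeros(8), b=0.0)
```

In a trained network the gate would learn, per location, whether the frame stream (e.g. well-lit, static scenes) or the event stream (e.g. high-speed motion, extreme lighting) is more reliable.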