
ICIP 2021

Paper Detail

Paper ID: MLR-APPL-IVASR-2.7
Paper Title: Geospatial-temporal Convolutional Neural Network for Video-Based Precipitation Intensity Recognition
Authors: Chih-Wei Lin, Suhui Yang, Fujian Agriculture and Forestry University, China
Session: MLR-APPL-IVASR-2: Machine learning for image and video analysis, synthesis, and retrieval 2
Location: Area D
Session Time: Monday, 20 September, 15:30 - 17:00
Presentation Time: Monday, 20 September, 15:30 - 17:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
IEEE Xplore: Open Preview available in IEEE Xplore
Abstract: In this work, we propose a new framework, the Geospatial-temporal Convolutional Neural Network (GT-CNN), and construct a video-based geospatial-temporal precipitation dataset from surveillance cameras at eight weather stations (sampling points) to recognize precipitation intensity. GT-CNN has three key modules: (1) a geospatial module, (2) a temporal module, and (3) a fusion module. In the geospatial module, we extract precipitation information from each sampling point simultaneously and use an LSTM to construct the geospatial relationships between the sampling points. In the temporal module, we apply 3D convolution to capture precipitation features with temporal information from a series of precipitation images at each sampling point. Finally, the fusion module fuses the geospatial and temporal features. We evaluate the framework with three metrics and compare GT-CNN with state-of-the-art methods on the self-collected dataset. Experimental results demonstrate that our approach surpasses the state-of-the-art methods across these metrics.
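The abstract's three-module layout can be sketched as follows. This is a minimal PyTorch illustration of the described data flow only; all layer sizes, the pooling choices, the mean-based fusion, and the class count are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the GT-CNN three-module layout described in the
# abstract; layer sizes, fusion strategy, and class count are assumptions.
import torch
import torch.nn as nn

class GTCNN(nn.Module):
    def __init__(self, num_points=8, num_classes=4, feat_dim=64):
        super().__init__()
        # Temporal module: 3D convolution over a clip of precipitation
        # frames from one sampling point (channels, frames, height, width).
        self.temporal = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        # Geospatial module: an LSTM over the sequence of per-point
        # features models relationships between the sampling points.
        self.geospatial = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        # Fusion module: combine geospatial and temporal summaries and
        # classify precipitation intensity.
        self.fusion = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, points, channels, frames, height, width)
        b, p = clips.shape[:2]
        feats = self.temporal(clips.flatten(0, 1)).view(b, p, -1)
        geo, _ = self.geospatial(feats)           # (b, p, feat_dim)
        fused = torch.cat([geo.mean(1), feats.mean(1)], dim=1)
        return self.fusion(fused)                 # (b, num_classes)

# One batch of 2 samples, 8 sampling points, 4-frame RGB clips of 16x16.
model = GTCNN()
logits = model(torch.randn(2, 8, 3, 4, 16, 16))
```

The key structural point this sketch captures is that the temporal module runs per sampling point (the batch and point axes are flattened together), while the LSTM runs across the eight points to relate them geospatially.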