ICIP 2021 Paper Detail

Paper ID: MLR-APPL-IVASR-5.3
Paper Title: EFFECTIVE GAIT FEATURE EXTRACTION USING TEMPORAL FUSION AND SPATIAL PARTIAL
Authors: Yifan Chen, School of Computer Science and School of Artificial Intelligence, Optics and Electronics (iOPEN), China; Yang Zhao, Xuelong Li, School of Artificial Intelligence, Optics and Electronics (iOPEN), China
Session: MLR-APPL-IVASR-5: Machine learning for image and video analysis, synthesis, and retrieval 5
Location: Area C
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
IEEE Xplore: Open Preview available
Abstract: Gait recognition provides a convenient way to identify humans, as it requires less cooperation and is less intrusive than other biometric modalities. Current gait recognition frameworks either rely on a template to extract temporal features or treat the whole person as a single unit, so they capture only limited temporal information and few fine-grained features. To overcome this problem, we propose a network consisting of two parts: Temporal Feature Fusion (TFF) and Fine-grained Feature Extraction (FFE). First, TFF extracts the most representative temporal information from raw gait sequences. Next, we apply the idea of partial features to the fused temporal features to extract more fine-grained spatial block features. The proposed algorithm provides an effective feature extraction framework for complex gait recognition, as it focuses both on temporal fusion for representative information and on the extraction of fine-grained spatial features. Extensive experiments show outstanding performance on CASIA-B and mini-OUMVLP compared with other state-of-the-art methods, including GaitSet and GaitNet. In particular, the average rank-1 accuracy over all probe views under the normal walking condition (NM) reaches 95.7%.
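The two-stage pipeline described in the abstract — fuse frame-level features over time, then split the fused map into horizontal blocks for fine-grained partial features — can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the abstract does not specify the fusion operator, so max-pooling over time and four horizontal blocks are assumptions chosen for illustration.

```python
import numpy as np

def temporal_fusion(seq):
    # seq: (T, C, H, W) frame-level feature maps of a gait sequence.
    # Max over the time axis is one plausible fusion; the paper's actual
    # TFF module is not specified in the abstract.
    return seq.max(axis=0)  # (C, H, W)

def partial_features(fmap, n_blocks=4):
    # Split the fused map into horizontal blocks and pool each block,
    # following the "partial features" idea for spatial block features.
    blocks = np.array_split(fmap, n_blocks, axis=1)  # split along height
    return np.stack([b.mean(axis=(1, 2)) for b in blocks])  # (n_blocks, C)

# Toy sequence: 30 frames, 64 channels, 64x44 feature maps (assumed sizes).
seq = np.random.rand(30, 64, 64, 44)
fused = temporal_fusion(seq)        # (64, 64, 44)
parts = partial_features(fused)     # (4, 64): one descriptor per block
```

Each row of `parts` is a per-block descriptor; a real model would feed these through separate fully connected layers before the recognition loss.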