
My ICIP 2021 Schedule


Paper Detail

Paper ID: SMR-3.9
Paper Title: Low-Rank and Sparse Tensor Representation for Multi-View Subspace Clustering
Authors: Shuqin Wang, Beijing Jiaotong University, China; Yongyong Chen, Harbin Institute of Technology, Shenzhen, China; Yigang Cen, Beijing Jiaotong University, China; Linna Zhang, Guizhou University, China; Viacheslav Voronin, Moscow State University of Technology “STANKIN”, Russian Federation
Session: SMR-3: Image and Video Representation
Location: Area F
Session Time: Tuesday, 21 September, 15:30 - 17:00
Presentation Time: Tuesday, 21 September, 15:30 - 17:00
Presentation: Poster
Topic: Image and Video Sensing, Modeling, and Representation: Image & video representation
IEEE Xplore: Open Preview available
Abstract: Learning an effective affinity matrix as the input to spectral clustering is a key issue in achieving promising multi-view subspace clustering. In this paper, we propose a low-rank and sparse tensor representation (LRSTR) method that learns the affinity matrix through a self-representation tensor and retains the similarity information across the view dimension for multi-view subspace clustering. Specifically, the proposed LRSTR method imposes tensor nuclear-norm and tensor sparsity constraints on the self-representation tensor to characterize the relationships between views. The optimization model is solved under the framework of the alternating direction method of multipliers (ADMM). Experimental results on four datasets show that the proposed LRSTR method outperforms several state-of-the-art methods.
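The two constraints named in the abstract correspond to two standard proximal operators that appear as sub-steps in an ADMM solver: soft-thresholding for the tensor sparsity term, and singular value thresholding of each frontal slice in the Fourier domain (the t-SVD construction) for the tensor nuclear norm. The sketch below, which is an illustration of these generic operators and not the authors' released code, shows both in NumPy; function names and the third-mode FFT convention are assumptions.

```python
import numpy as np

def soft_threshold(X, tau):
    # Proximal operator of the l1 norm: shrinks entries toward zero,
    # promoting a sparse self-representation tensor.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def tensor_svt(T, tau):
    # Proximal operator of the t-SVD-based tensor nuclear norm:
    # FFT along the third (view) mode, singular value thresholding
    # on each frontal slice, then inverse FFT back.
    F = np.fft.fft(T, axis=2)
    out = np.zeros_like(F)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(F[:, :, k], full_matrices=False)
        s = np.maximum(s - tau, 0.0)          # threshold singular values
        out[:, :, k] = (U * s) @ Vh           # rebuild the slice
    return np.real(np.fft.ifft(out, axis=2))
```

In an ADMM loop for a low-rank-plus-sparse model, each iteration would apply `tensor_svt` to the low-rank auxiliary variable and `soft_threshold` to the sparse one, followed by a dual update; the thresholds play the role of the penalty-scaled regularization weights.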