ICIP 2021 Paper Detail

Paper ID: MLR-APPL-IVASR-3.11
Paper Title: A SPHERICAL MIXTURE MODEL APPROACH FOR 360 VIDEO VIRTUAL CINEMATOGRAPHY
Authors: Chenglei Wu, Zhi Wang, Lifeng Sun, Tsinghua University, China
Session: MLR-APPL-IVASR-3: Machine learning for image and video analysis, synthesis, and retrieval 3
Location: Area E
Session Time: Tuesday, 21 September, 08:00 - 09:30
Presentation Time: Tuesday, 21 September, 08:00 - 09:30
Presentation: Poster
Topic: Applications of Machine Learning: Machine learning for image & video analysis, synthesis, and retrieval
Abstract: 360 video virtual cinematography attempts to direct a virtual camera and capture the most salient regions of 360 videos. In this paper, we propose a data-driven solution to achieve high-quality and diversified 360 cinematography based on crowdsourced viewing histories. Specifically, we try to address two problems: 1) how to locate the semantically important regions of interest (RoIs) from raw data, and 2) how to generate virtual camera paths that follow chronological narratives. We first design a dynamic spherical mixture model based algorithm to locate a variable number of RoIs on each video frame. We then model the camera transitions and chronological orders with a Bayesian network and conditional probabilities. With the above two designs, we can generate “optimal” cinematography paths based on a dynamic programming algorithm. By modeling the RoIs as a spherical mixture model, we are also able to provide diversified cinematography results. We demonstrate the effectiveness of our algorithm through extensive experiments.
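
The abstract does not spell out the exact form of the spherical mixture model, so the sketch below is only illustrative: it assumes a mixture of von Mises-Fisher distributions fitted by EM to crowdsourced gaze directions on the unit sphere, with a fixed number of components k (the paper's dynamic algorithm additionally adapts the number of RoIs per frame). The function fit_vmf_mixture and all of its parameters are hypothetical and are not the authors' code.

import numpy as np

def fit_vmf_mixture(x, k, n_iter=50, seed=0):
    # Fit a k-component mixture of von Mises-Fisher distributions to unit
    # vectors x of shape (n, 3) with expectation-maximization.
    rng = np.random.default_rng(seed)
    n, d = x.shape
    resp = rng.dirichlet(np.ones(k), size=n)            # random soft assignments
    for _ in range(n_iter):
        # M-step: mixing weights, mean directions, concentrations.
        nk = resp.sum(axis=0)                            # effective counts per component
        weights = nk / n
        r = resp.T @ x                                   # (k, 3) weighted resultant vectors
        r_norm = np.linalg.norm(r, axis=1)
        mu = r / np.clip(r_norm, 1e-12, None)[:, None]   # unit mean directions
        r_bar = np.clip(r_norm / np.clip(nk, 1e-12, None), 1e-6, 1 - 1e-6)
        kappa = r_bar * (d - r_bar**2) / (1 - r_bar**2)  # Banerjee et al. approximation
        kappa = np.clip(kappa, 1e-2, 500.0)              # keep sinh() finite
        # E-step: responsibilities from vMF log-densities (d = 3 normalizer).
        log_c = np.log(kappa) - np.log(4 * np.pi * np.sinh(kappa))
        log_p = x @ (kappa[:, None] * mu).T + log_c + np.log(np.clip(weights, 1e-12, None))
        log_p -= log_p.max(axis=1, keepdims=True)
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
    return weights, mu, kappa, resp

# Example: cluster synthetic gaze directions into two candidate RoIs.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([0, 0, 1], 0.1, (200, 3)),
                 rng.normal([1, 0, 0], 0.1, (200, 3))])
pts /= np.linalg.norm(pts, axis=1, keepdims=True)        # project onto the unit sphere
w, mu, kappa, _ = fit_vmf_mixture(pts, k=2)
print(w, mu)

Under these assumptions, the fitted mean directions mu can be read as candidate RoI centers on the sphere and kappa as how tightly viewer attention concentrates around each of them; the concentration update uses the standard closed-form approximation rather than an exact maximum-likelihood solve.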