
My ICIP 2021 Schedule


Paper Detail

Paper ID: COVID-IP-2.4
Paper Title: A 3D model-based approach for fitting masks to faces in the wild
Authors: Je Hyeong Hong, Korea Institute of Science and Technology, Republic of Korea; Hanjo Kim, Seoul National University, Republic of Korea; Minsoo Kim, University of Science and Technology, Republic of Korea; Gi Pyo Nam, Junghyun Cho, Korea Institute of Science and Technology, Republic of Korea; Hyeong-Seok Ko, Seoul National University, Republic of Korea; Ig-Jae Kim, University of Science and Technology, Republic of Korea
Session: COVID-IP-2: COVID-Related Image Processing 2
Location: Area A
Session Time: Tuesday, 21 September, 13:30 - 15:00
Presentation Time: Tuesday, 21 September, 13:30 - 15:00
Presentation: Poster
Topic: COVID-Related Image Processing
IEEE Xplore Open Preview: available in IEEE Xplore
Abstract: In the era of the unprecedented COVID-19 pandemic, face recognition requires a large number of labelled masked face images. Unfortunately, the rapid spread of the virus has left little time to prepare such a dataset in the wild. To circumvent this issue, we present a 3D model-based approach called WearMask3D for augmenting face images of various poses into their masked face counterparts. Our method proceeds by first fitting a 3D morphable model to the input image, then overlaying the mask surface onto the face model and warping the respective mask texture, and finally projecting the 3D mask back to 2D. The mask texture is adapted based on the brightness and resolution of the input image. By working in 3D, our method can produce more natural masked faces of diverse poses from a single mask texture. To compare precisely between different augmentation approaches, we have constructed a labelled dataset of masked and unmasked faces called MFW-mini. Experimental results demonstrate that WearMask3D produces more realistic masked faces, and that using these images for training leads to state-of-the-art recognition accuracy for masked faces.
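The abstract states that the mask texture is adapted to the brightness of the input image, but not how. Below is a minimal, hypothetical sketch of one way such an adaptation step could work: scaling the mask texture so its mean brightness matches that of the face region. The function name and the gain-matching strategy are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def adapt_mask_brightness(mask_texture, face_region):
    """Scale the mask texture so its mean brightness matches the face region.

    Both inputs are float arrays in [0, 1] with shape (H, W, 3).
    Hypothetical helper: the paper only says the texture is adapted to the
    input image's brightness; this mean-matching gain is one simple choice.
    """
    face_mean = face_region.mean()
    mask_mean = mask_texture.mean()
    gain = face_mean / max(mask_mean, 1e-6)  # avoid division by zero
    return np.clip(mask_texture * gain, 0.0, 1.0)

# Toy example: a dark face crop and a bright mask texture.
face = np.full((8, 8, 3), 0.3)
mask = np.full((8, 8, 3), 0.9)
adapted = adapt_mask_brightness(mask, face)  # mean brightness now ~0.3
```

A real pipeline would apply this only to the visible mask pixels after the 3D-to-2D projection, and could match local rather than global statistics.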