SketchFaceNeRF: Sketch-based Facial Generation and Editing in Neural Radiance Fields

Authors:

Lin Gao (1,2), Feng-Lin Liu (1,2), Shu-Yu Chen (1), Kaiwen Jiang (1,3), Chun-Peng Li (1), Yu-Kun Lai (4), Hongbo Fu (5)

Affiliations:

1. Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

2. University of Chinese Academy of Sciences, Beijing, China

3. Beijing Jiaotong University, Beijing, China

4. Cardiff University, Cardiff, United Kingdom

5. City University of Hong Kong, Hong Kong, China

Abstract

Realistic 3D facial generation based on Neural Radiance Fields (NeRFs) from 2D sketches benefits various applications. Despite the high realism of NeRFs' free-view rendering results, it is tedious and difficult for artists to achieve detailed 3D control and manipulation. Meanwhile, due to its conciseness and expressiveness, sketching has been widely used for 2D facial image generation and editing. Applying sketching to NeRFs is challenging because of the inherent uncertainty of 3D generation under 2D constraints, the significant gap in content richness when generating faces from sparse sketches, and potential inconsistencies during sequential multi-view editing given only 2D sketch inputs. To address these challenges, we present SketchFaceNeRF, a novel sketch-based 3D facial NeRF generation and editing method that produces free-view photo-realistic images. To handle sketch sparsity, we introduce a Sketch Tri-plane Prediction network that first injects appearance into the sketches, generating features conditioned on reference images to allow color and texture control. These features are then lifted into compact 3D tri-planes to supplement the missing 3D information, which is important for improving robustness and faithfulness. During editing, however, consistency of unseen or unedited 3D regions is difficult to maintain because sketches provide only limited spatial hints. We therefore adopt a Mask Fusion module that transforms free-view 2D masks (inferred from sketch editing operations) into the tri-plane space as 3D masks, which guide the fusion of the original and sketch-generated faces to synthesize the edited faces. We further design an optimization approach with a novel space loss to improve identity retention and editing faithfulness. Our pipeline enables users to flexibly manipulate faces from different viewpoints in 3D space and easily design desirable facial models. Extensive experiments validate that our approach outperforms state-of-the-art 2D sketch-based image generation and editing approaches in realism and faithfulness.
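The mask-guided fusion described in the abstract can be made concrete with a minimal sketch: a 3D edit mask, once projected onto the tri-plane space, blends the feature planes of the original face with those of the sketch-generated face. The function name fuse_triplanes, the tensor shapes, and the simple linear-blend rule below are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of mask-guided tri-plane fusion.
# Shapes, names, and the linear-blend rule are assumptions for illustration.
import torch


def fuse_triplanes(original: torch.Tensor,
                   edited: torch.Tensor,
                   mask_3d: torch.Tensor) -> torch.Tensor:
    """Blend tri-plane features of the original and sketch-generated faces.

    original, edited: (3, C, H, W) feature maps for the XY, XZ, YZ planes.
    mask_3d:          (3, 1, H, W) soft edit mask projected onto the same
                      planes, ~1 inside the edited region, ~0 elsewhere.
    """
    # Keep generated features where the mask is active, original elsewhere.
    return mask_3d * edited + (1.0 - mask_3d) * original


if __name__ == "__main__":
    planes_orig = torch.randn(3, 32, 256, 256)   # original face tri-planes
    planes_edit = torch.randn(3, 32, 256, 256)   # sketch-generated tri-planes
    mask = torch.rand(3, 1, 256, 256)            # soft per-plane edit mask
    fused = fuse_triplanes(planes_orig, planes_edit, mask)
    print(fused.shape)  # torch.Size([3, 32, 256, 256])
```

In the actual pipeline the fused tri-planes would then be decoded and volume-rendered by the NeRF generator; the blend above only illustrates how a soft 3D mask confines edits to the selected region while leaving the rest of the face untouched.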

Funders

The National Natural Science Foundation of China

The Beijing Municipal Natural Science Foundation for Distinguished Young Scholars

China Postdoctoral Science Foundation

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design


Cited by (1 article)

1. Interactive NeRF Geometry Editing With Shape Priors. IEEE Transactions on Pattern Analysis and Machine Intelligence, December 2023.
