Mesh Neural Cellular Automata

Authors:

Ehsan Pajouheshgar¹, Yitao Xu¹, Alexander Mordvintsev², Eyvind Niklasson², Tong Zhang¹, Sabine Süsstrunk¹

Affiliations:

1. School of Computer and Communication Sciences, EPFL, Lausanne, Switzerland

2. Google Research, Zurich, Switzerland

Abstract

Texture modeling and synthesis are essential for enhancing the realism of virtual environments. Methods that directly synthesize textures in 3D offer distinct advantages over UV-mapping-based methods, as they can create seamless textures and align more closely with the ways textures form in nature. We propose Mesh Neural Cellular Automata (MeshNCA), a method that directly synthesizes dynamic textures on 3D meshes without requiring any UV maps. MeshNCA is a generalized type of cellular automaton that can operate on a set of cells arranged on non-grid structures, such as the vertices of a 3D mesh. MeshNCA accommodates multi-modal supervision and can be trained using different targets, such as images, text prompts, and motion vector fields. Trained only on an Icosphere mesh, MeshNCA shows remarkable test-time generalization and can synthesize textures on unseen meshes in real time. We conduct qualitative and quantitative comparisons to demonstrate that MeshNCA outperforms other 3D texture synthesis methods in terms of generalization and producing high-quality textures. Moreover, we introduce a way of grafting trained MeshNCA instances, enabling interpolation between textures. MeshNCA supports several user interactions, including texture density/orientation controls, grafting/regenerate brushes, and motion speed/direction controls. Finally, we implement the forward pass of our MeshNCA model using the WebGL shading language and showcase our trained models in an online interactive demo, which is accessible on personal computers and smartphones and is available at https://meshnca.github.io/.
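The abstract's core idea is a cellular automaton whose cells live on mesh vertices rather than grid pixels: each cell perceives its neighbors over the mesh edges and applies a shared learned update rule. The following is a minimal illustrative sketch of one such update step, not the authors' implementation: the mean-of-neighbors perception, the tiny two-layer MLP, and all function and parameter names here are assumptions for exposition (the actual model uses a trained, more elaborate perception and update).

```python
import numpy as np

def mesh_nca_step(states, neighbors, w1, b1, w2, b2):
    """One illustrative MeshNCA-style update on mesh vertices (a sketch).

    states:    (V, C) array, one C-channel cell state per vertex
    neighbors: list of V lists of adjacent vertex indices (mesh edges)
    """
    # Perception: each cell sees its own state plus the mean of its
    # neighbors' states -- a graph analogue of convolutional perception.
    neigh_mean = np.stack([states[idx].mean(axis=0) for idx in neighbors])
    perception = np.concatenate([states, neigh_mean], axis=1)  # (V, 2C)
    # Update rule: a small two-layer MLP shared by all cells.
    hidden = np.maximum(perception @ w1 + b1, 0.0)  # ReLU
    delta = hidden @ w2 + b2
    return states + delta  # residual update of the cell states

# Toy mesh: a tetrahedron (4 vertices, each adjacent to the other 3).
rng = np.random.default_rng(0)
C, H = 8, 16
states = rng.normal(size=(4, C))
neighbors = [[1, 2, 3], [0, 2, 3], [0, 1, 3], [0, 1, 2]]
w1 = rng.normal(size=(2 * C, H)) * 0.1
b1 = np.zeros(H)
w2 = rng.normal(size=(H, C)) * 0.1
b2 = np.zeros(C)
new_states = mesh_nca_step(states, neighbors, w1, b1, w2, b2)
print(new_states.shape)  # (4, 8)
```

Because the update rule is shared across all cells and depends only on local neighborhoods, it applies unchanged to any mesh connectivity, which is consistent with the test-time generalization to unseen meshes described above.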

Funder

Swiss National Science Foundation

Publisher

Association for Computing Machinery (ACM)
