Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

Neural mesh refinement

Abstract: Subdivision is a widely used technique for mesh refinement. Classic methods rely on fixed manually defined weighting rules and struggle to generate a finer mesh with appropriate details, while advanced neural subdivision methods achieve data-driven nonlinear subdivision but lack robustness, suffering from limited subdivision levels and artifacts on novel shapes. To address these issues, this paper introduces a neural mesh refinement (NMR) method that uses the geometric structural priors learned from fine meshes to adaptively refine coarse meshes through subdivision, demonstrating robust generalization. Our key insight is that it is necessary to disentangle the network from non-structural information such as scale, rotation, and translation, enabling the network to focus on learning and applying the structural priors of local patches for adaptive refinement. For this purpose, we introduce an intrinsic structure descriptor and a locally adaptive neural filter. The intrinsic structure descriptor excludes the non-structural information to align local patches, thereby stabilizing the input feature space and enabling the network to robustly extract structural priors. The proposed neural filter, using a graph attention mechanism, extracts local structural features and adapts learned priors to local patches. Additionally, we observe that Charbonnier loss can alleviate over-smoothing compared to L2 loss. By combining these design choices, our method gains robust geometric learning and locally adaptive capabilities, enhancing generalization to various situations such as unseen shapes and arbitrary refinement levels. We evaluate our method on a diverse set of complex three-dimensional (3D) shapes, and experimental results show that it outperforms existing subdivision methods in terms of geometry quality. See https://zhuzhiwei99.github.io/NeuralMeshRefinement for the project page.
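The abstract's key idea of aligning local patches by excluding non-structural information (translation, scale, rotation) can be sketched in a few lines. The following is a hypothetical illustration, not the authors' exact descriptor: `align_patch` is an assumed name, and the PCA-based canonical rotation is one common way to factor out orientation.

```python
import numpy as np

def align_patch(verts):
    """Normalize a local patch (N x 3 vertex array) so that only its
    intrinsic structure remains. Illustrative sketch only."""
    # Remove translation: move the patch centroid to the origin.
    v = verts - verts.mean(axis=0)
    # Remove scale: normalize by the mean vertex distance to the centroid.
    v = v / np.linalg.norm(v, axis=1).mean()
    # Remove rotation: rotate the patch's principal axes (from PCA
    # via SVD) onto the coordinate axes for a canonical orientation.
    _, _, vt = np.linalg.svd(v, full_matrices=False)
    return v @ vt.T
```

After such alignment, congruent patches map to (nearly) the same input features regardless of where, how large, or how oriented they are in the mesh, which is what stabilizes the network's input space.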

Key words: Geometry processing; Mesh refinement; Mesh subdivision; Disentangled representation learning; Neural network; Graph attention
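The Charbonnier loss mentioned in the abstract has a standard closed form, L(r) = sqrt(r² + ε²): quadratic like L2 near zero but linear like L1 for large residuals, so large errors are penalized less aggressively. A minimal sketch (the ε value is an assumption; the paper may use a different setting):

```python
import numpy as np

def charbonnier(residual, eps=1e-3):
    # sqrt(r^2 + eps^2): behaves like L2 near zero and like L1 for
    # large residuals, which down-weights outliers relative to L2 and
    # is one intuition for why it can alleviate over-smoothing.
    return np.sqrt(residual ** 2 + eps ** 2)

r = np.array([0.01, 0.1, 1.0, 2.0])
print(charbonnier(r))  # grows roughly linearly for large r
print(r ** 2)          # L2 grows quadratically
```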

Zhiwei Zhu1,2, Xiang Gao1,2, Lu Yu1,2, Yiyi Liao1,2
1College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
2Zhejiang Provincial Key Laboratory of Information Processing, Communication and Networking, Hangzhou 310027, China


DOI: 10.1631/FITEE.2400344
CLC number: TP391


On-line Access: 2025-06-04
Received: 2024-04-30
Revision Accepted: 2024-09-18
Crosschecked: 2025-06-04

Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952276; Fax: +86-571-87952331; E-mail: jzus@zju.edu.cn
Copyright © 2000–present, Journal of Zhejiang University-SCIENCE