CLC number: Q811.211

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2023-10-27

 ORCID:

Junjun CHEN

https://orcid.org/0000-0001-8364-2188

Nenggan ZHENG

https://orcid.org/0000-0002-0211-8817


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.10 P.1482-1496

http://doi.org/10.1631/FITEE.2200529


Path guided motion synthesis for Drosophila larvae


Author(s):  Junjun CHEN, Yijun WANG, Yixuan SUN, Yifei YU, Zi'ao LIU, Zhefeng GONG, Nenggan ZHENG

Affiliation(s):  Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China (see the full affiliation list below)

Corresponding email(s):   1536779079@qq.com, zng@cs.zju.edu.cn

Key Words:  Motion synthesis of mollusks, Dynamic pose dataset, Morphological analysis, Long pose sequence generation


Junjun CHEN, Yijun WANG, Yixuan SUN, Yifei YU, Zi'ao LIU, Zhefeng GONG, Nenggan ZHENG. Path guided motion synthesis for Drosophila larvae[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(10): 1482-1496.

@article{chen2023path,
title="Path guided motion synthesis for Drosophila larvae",
author="Junjun CHEN, Yijun WANG, Yixuan SUN, Yifei YU, Zi'ao LIU, Zhefeng GONG, Nenggan ZHENG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="10",
pages="1482-1496",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200529"
}

%0 Journal Article
%T Path guided motion synthesis for Drosophila larvae
%A Junjun CHEN
%A Yijun WANG
%A Yixuan SUN
%A Yifei YU
%A Zi'ao LIU
%A Zhefeng GONG
%A Nenggan ZHENG
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 10
%P 1482-1496
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200529

TY - JOUR
T1 - Path guided motion synthesis for Drosophila larvae
A1 - Junjun CHEN
A1 - Yijun WANG
A1 - Yixuan SUN
A1 - Yifei YU
A1 - Zi'ao LIU
A1 - Zhefeng GONG
A1 - Nenggan ZHENG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 10
SP - 1482
EP - 1496
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200529
ER -


Abstract: 
The deformable bodies and high degrees of freedom of mollusks make their motions difficult to model mathematically and to synthesize. Traditional analytical and statistical models are limited by rigid-skeleton assumptions or by model capacity, and struggle to generate realistic, multi-pattern mollusk motions. In this work, we present a large-scale dynamic pose dataset of Drosophila larvae and propose a motion synthesis model, Path2Pose, which generates a pose sequence given the initial poses and a guiding path for the subsequent motion. Through a recursive generation method, Path2Pose is further used to synthesize long pose sequences covering various motion patterns. Evaluation results demonstrate that our model synthesizes highly realistic mollusk motions and achieves state-of-the-art performance. Our work demonstrates the strong performance of deep neural networks for mollusk motion synthesis and the feasibility of synthesizing long pose sequences from a customized body shape and guiding path.

Path guided motion synthesis for Drosophila larvae

Junjun CHEN1,2, Yijun WANG1, Yixuan SUN1, Yifei YU1, Zi'ao LIU1, Zhefeng GONG1,4,5, Nenggan ZHENG1,3
1 Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China
2 School of Rehabilitation Sciences and Engineering, University of Health and Rehabilitation Sciences, Qingdao 266114, China
3 Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou 310027, China
4 Department of Neurobiology and Department of Neurology of the Second Affiliated Hospital, Affiliated Mental Health Center, Zhejiang University School of Medicine, Hangzhou 310058, China
5 MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, and Key Laboratory of Medical Neurobiology of the Ministry of Health, Zhejiang University, Hangzhou 310058, China
Abstract: The deformable bodies and high degrees of freedom of mollusks pose great challenges for mathematical modeling and motion synthesis. Constrained by rigid-skeleton assumptions or model capacity, traditional analytical and statistical models have difficulty generating realistic, multi-pattern mollusk motions. In this paper, we build a large-scale dynamic pose dataset of Drosophila larvae and propose a motion synthesis model, Path2Pose, which generates the subsequent pose sequence from a given initial pose sequence and a guiding path. Furthermore, through recursive generation, Path2Pose can synthesize long, multi-pattern pose sequences of larval motion. Evaluation experiments show that Path2Pose generates highly realistic mollusk motions and achieves the best performance among existing models of its kind. Our work demonstrates the good performance of deep neural networks for mollusk motion synthesis and the feasibility of generating long pose sequences from a customized body shape and a guiding path.

Key words: Motion synthesis of mollusks; Dynamic pose dataset; Morphological analysis; Long pose sequence generation


