CLC number: TP39
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-12-17
Xiali LI, Yanyin ZHANG, Licheng WU, Yandong CHEN, Junzhi YU. TibetanGoTinyNet: a lightweight U-Net style network for zero learning of Tibetan Go[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(7): 924-937.
@article{Li2024TibetanGoTinyNet,
title="TibetanGoTinyNet: a lightweight U-Net style network for zero learning of Tibetan Go",
author="Xiali LI and Yanyin ZHANG and Licheng WU and Yandong CHEN and Junzhi YU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="7",
pages="924-937",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300493"
}
%0 Journal Article
%T TibetanGoTinyNet: a lightweight U-Net style network for zero learning of Tibetan Go
%A Xiali LI
%A Yanyin ZHANG
%A Licheng WU
%A Yandong CHEN
%A Junzhi YU
%J Frontiers of Information Technology & Electronic Engineering
%V 25
%N 7
%P 924-937
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300493
TY - JOUR
T1 - TibetanGoTinyNet: a lightweight U-Net style network for zero learning of Tibetan Go
A1 - Xiali LI
A1 - Yanyin ZHANG
A1 - Licheng WU
A1 - Yandong CHEN
A1 - Junzhi YU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 25
IS - 7
SP - 924
EP - 937
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300493
ER -
Abstract: The game of Tibetan Go suffers from a scarcity of expert knowledge and research literature. We therefore study a zero learning model for Tibetan Go under limited computing resources and propose TibetanGoTinyNet, a novel lightweight, scale-invariant, U-Net style network with a two-headed output. Lightweight convolutional neural networks and a capsule structure are applied to the encoder and decoder of TibetanGoTinyNet to reduce the computational burden and improve feature extraction. Several autonomous self-attention mechanisms are integrated into TibetanGoTinyNet to capture the spatial and global information of the Tibetan Go board and to select important channels. The training data are generated entirely from self-play games. TibetanGoTinyNet achieves a 62%-78% winning rate against four other U-Net style models, namely Res-UNet, Res-UNet Attention, Ghost-UNet, and Ghost Capsule-UNet. It also achieves a 75% winning rate in ablation experiments on the attention mechanism with embedded positional information. When migrated from the 9×9 board to the 11×11 board, the model saves about 33% of the training time while maintaining a 45%-50% winning rate under different Monte Carlo tree search (MCTS) simulation counts. Code for our model is available at https://github.com/paulzyy/TibetanGoTinyNet.
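The abstract outlines the core design, a lightweight U-Net style encoder-decoder whose two output heads estimate a move policy and a position value and which transfers between 9×9 and 11×11 boards, but gives no layer-level details. The sketch below is only a minimal illustration of that idea, not the authors' implementation (which is in the linked repository): the input plane count, channel width, and names such as TinyUNetPV are assumptions, and the capsule and attention modules described in the abstract are omitted.

```python
# Minimal sketch of a two-headed (policy/value) U-Net style network for Tibetan Go.
# Assumptions (not from the paper): IN_PLANES input feature planes, channel width 32,
# fully convolutional heads, and no capsule or attention modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

IN_PLANES = 17  # assumed number of input planes (e.g., stone history + side to move)

def conv_block(c_in, c_out):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class TinyUNetPV(nn.Module):
    """One-level U-Net style encoder-decoder with policy and value heads.

    All layers are convolutional, so the same weights accept any board size
    (e.g., 9x9 or 11x11); the pass move is omitted for brevity.
    """
    def __init__(self, in_planes=IN_PLANES, width=32):
        super().__init__()
        self.enc1 = conv_block(in_planes, width)        # full-resolution features
        self.enc2 = conv_block(width, 2 * width)        # features after 2x downsampling
        self.up = nn.ConvTranspose2d(2 * width, width, 2, stride=2)
        self.dec1 = conv_block(2 * width, width)        # fuse skip connection + upsampled path
        self.policy_conv = nn.Conv2d(width, 1, 1)       # one move logit per intersection
        self.value_conv = nn.Conv2d(width, 1, 1)        # per-intersection value evidence

    def forward(self, x):
        e1 = self.enc1(x)                                        # (N, w, H, W)
        e2 = self.enc2(F.max_pool2d(e1, 2, ceil_mode=True))      # (N, 2w, ceil(H/2), ceil(W/2))
        d1 = self.up(e2)[..., : e1.shape[-2], : e1.shape[-1]]    # upsample, crop back to (H, W)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))               # U-Net skip connection
        policy = F.log_softmax(self.policy_conv(d1).flatten(1), dim=1)                # (N, H*W)
        value = torch.tanh(F.adaptive_avg_pool2d(self.value_conv(d1), 1).flatten(1))  # (N, 1)
        return policy, value

if __name__ == "__main__":
    net = TinyUNetPV()
    for board in (9, 11):  # the same weights run on both board sizes
        p, v = net(torch.zeros(1, IN_PLANES, board, board))
        print(board, p.shape, v.shape)  # (1, board*board) and (1, 1)
```

Using purely convolutional heads rather than fixed-size fully connected layers is one way to keep the outputs tied to board intersections, which is consistent with the abstract's claim that the model can be migrated from the 9×9 board to the 11×11 board.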