CLC number: TP391
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-09-06
https://orcid.org/0000-0003-1826-1850
https://orcid.org/0000-0003-0664-3149
Li WEIGANG, Mayara Chew MARINHO, Denise Leyi LI, Vitor Vasconcelos DE OLIVEIRA. Six-Writings multimodal processing with pictophonetic coding to enhance Chinese language models[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(1): 84-105.
@article{title="Six-Writings multimodal processing with pictophonetic coding to enhance Chinese language models",
author="Li WEIGANG, Mayara Chew MARINHO, Denise Leyi LI, Vitor Vasconcelos DE OLIVEIRA",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="1",
pages="84-105",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300384"
}
Abstract: While large language models (LLMs) have made significant strides in natural language processing (NLP), they still struggle to adequately address the intricacies of the Chinese language in certain scenarios. We propose a framework called Six-Writings multimodal processing (SWMP) to enable direct integration of Chinese NLP (CNLP) with morphological and semantic elements. The first part of SWMP, Six-Writings pictophonetic coding (SWPC), is introduced with a suitable level of granularity for radicals and components, enabling effective representation of Chinese characters and words. We evaluate SWMP in several experimental scenarios: (1) We establish an experimental database of images and SWPC for Chinese characters, enabling dual-mode processing and matrix generation for CNLP. (2) We characterize various generative modes of Chinese words, including thousands of Chinese idioms, and use them as question-and-answer (Q&A) prompt functions, facilitating analogies via SWPC; the experiments achieve 100% accuracy on all questions in the Chinese morphological data set (CA8-Mor-10177). (3) We propose a fine-tuning mechanism that refines word embedding results using SWPC, yielding an average relative error of ≤25% for 39.37% of the questions in the Chinese word similarity data set (COS960). The results demonstrate that the SWMP/SWPC methods effectively capture distinctive features of Chinese and offer a promising mechanism for enhancing CNLP efficiency.
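The abstract describes comparing character-level codes and refining embedding similarity, and the reference list cites Hamming (1950) for code comparison. As an illustration only (the paper's actual SWPC codes and fine-tuning mechanism are not reproduced here), the sketch below shows one generic way a fixed-length character code could be compared by Hamming distance and blended with embedding cosine similarity; all codes, vector sizes, and the blending weight `alpha` are hypothetical assumptions, not the authors' method:

```python
# Illustrative sketch only: Hamming distance over fixed-length codes
# (Hamming, 1950) blended with embedding cosine similarity.
# The codes and the weight alpha are made up, not the paper's SWPC.
import numpy as np

def hamming_distance(code_a: str, code_b: str) -> int:
    """Number of positions at which two equal-length codes differ."""
    assert len(code_a) == len(code_b), "codes must have equal length"
    return sum(c1 != c2 for c1, c2 in zip(code_a, code_b))

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def blended_similarity(code_a: str, code_b: str,
                       vec_a: np.ndarray, vec_b: np.ndarray,
                       alpha: float = 0.5) -> float:
    """Convex combination of code similarity and embedding similarity."""
    code_sim = 1.0 - hamming_distance(code_a, code_b) / len(code_a)
    return alpha * code_sim + (1.0 - alpha) * cosine_similarity(vec_a, vec_b)

# Toy usage with hypothetical 6-symbol codes and random embeddings
rng = np.random.default_rng(0)
sim = blended_similarity("AB12CD", "AB34CD",
                         rng.normal(size=50), rng.normal(size=50))
```

The convex combination is just one plausible way to let a structural code correct a purely distributional similarity score; the paper's own fine-tuning mechanism should be consulted for the actual formulation.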
[1]Cao SS, Lu W, Zhou J, et al., 2017. Investigating stroke-level information for learning Chinese word embeddings. Proc 16th Int Semantic Web Conf.
[2]Cao SS, Lu W, Zhou J, et al., 2018. cw2vec: learning Chinese word embeddings with stroke n-gram information. Proc 32nd AAAI Conf on Artificial Intelligence, 30th Innovative Applications of Artificial Intelligence Conf, and 8th AAAI Symp on Educational Advances in Artificial Intelligence, p.5053-5061.
[3]Chen HY, Yu SH, Lin SD, 2020. Glyph2Vec: learning Chinese out-of-vocabulary word embedding from glyphs. Proc 58th Annual Meeting of the Association for Computational Linguistics, p.2865-2871.
[4]Chen XX, Xu L, Liu ZY, et al., 2015. Joint learning of character and word embeddings. Proc 24th Int Conf on Artificial Intelligence, p.1236-1242.
[5]Everitt BS, Skrondal A, 2010. The Cambridge Dictionary of Statistics (4th Ed.). Cambridge University Press, Cambridge, UK.
[6]Feng ZW, 2012. A Concise Course of Natural Language Processing. Shanghai Foreign Language Education Press, Shanghai, China (in Chinese).
[7]Gao P, 2003. Standard Tutorial of Wubi Font Input Method. Science Press, Beijing, China (in Chinese).
[8]Hamming RW, 1950. Error detecting and error correcting codes. Bell Syst Tech J, 29(2):147-160.
[9]Huang BR, Li W, 2012. Contemporary Chinese Language. Peking University Press, Beijing, China (in Chinese).
[10]Huang JJ, Qi FC, Yang CH, et al., 2019. COS960: a Chinese word similarity dataset of 960 word pairs. https://arxiv.org/abs/1906.00247
[11]Jin H, Zhang ZB, Yuan PP, 2022. Improving Chinese word representation using four corners features. IEEE Trans Big Data, 8(4):982-993.
[12]Kang RZ, Zhang HJ, Hao WN, et al., 2019. Learning Chinese word embeddings with words and subcharacter n-grams. IEEE Access, 7:42987-42992.
[13]Levy O, Goldberg Y, Dagan I, 2015. Improving distributional similarity with lessons learned from word embeddings. Trans Assoc Comput Ling, 3:211-225.
[14]Li BA, Li Y, Meng QC, 2005. Chinese Information Processing Technology: Principles and Applications. Tsinghua University Press, Beijing, China (in Chinese).
[15]Li S, Zhao Z, Hu RF, et al., 2018. Analogical reasoning on Chinese morphological and semantic relations. Proc 56th Annual Meeting of the Association for Computational Linguistics, p.138-143.
[16]Liu MD, Liang X, 2021. A method of Chinese character glyph similarity calculation based on radical knowledge representation learning. J Chin Inform Process, 35(12):47-59 (in Chinese).
[17]Liu PF, Yuan WZ, Fu JL, et al., 2023. Pre-train, prompt, and predict: a systematic survey of prompting methods in natural language processing. ACM Comput Surv, 55(9):195.
[18]Lu W, Zhang ZB, Yuan PP, et al., 2022. Learning Chinese word embeddings by discovering inherent semantic relevance in sub-characters. Proc 31st ACM Int Conf on Information & Knowledge Management, p.1369-1378.
[19]Meng YX, Wu W, Wang F, et al., 2019. Glyce: Glyph-vectors for Chinese character representations. Proc 33rd Int Conf on Neural Information Processing Systems, p.2742-2753.
[20]Mikolov T, Yih WT, Zweig G, 2013. Linguistic regularities in continuous space word representations. Proc Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, p.746-751.
[21]Otsu N, 1979. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern, 9(1):62-66.
[22]Petrov A, la Malfa E, Torr PH, et al., 2023. Language model tokenizers introduce unfairness between languages. https://arxiv.org/abs/2305.15425
[23]Saleh AA, Weigang L, 2023. Deep self-organizing cube: a novel multi-dimensional classifier for multiple output learning. Expert Syst Appl, 230:120627.
[24]Schulman J, Zoph B, Kim C, 2022. Introducing ChatGPT. https://openai.com/blog/chatgpt [Accessed on May 30, 2023].
[25]Sheng YC, Zhang JM, Benes B, 2021. SSN: soft shadow network for image compositing. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.4378-4388.
[26]Sheng YC, Liu YF, Zhang JM, et al., 2022. Controllable shadow generation using pixel height maps. 17th European Conf on Computer Vision, p.240-256.
[27]Sheng YC, Zhang JM, Philip J, et al., 2023. PixHt-Lab: pixel height based light effect generation for image compositing. Proc IEEE/CVF Conf on Computer Vision and Pattern Recognition, p.16643-16653.
[28]Song JH, Li GY, Wang N, 2006. Productive representation on the phonetic-semantic relations of Shuowenjiezi. J Chin Inform Process, 20(2):53-59 (in Chinese).
[29]Standardization Administration of the People’s Republic of China, 2022. Information Technology - Chinese Coded Character Set. GB 18030-2022. National Standards of the People’s Republic of China (in Chinese).
[30]Su TR, Lee HY, 2017. Learning Chinese word representations from glyphs of characters. Proc Conf on Empirical Methods in Natural Language Processing, p.264-273.
[31]The Unicode Consortium, 2022. The Unicode Standard, Version 15.0. The Unicode Consortium, Mountain View, CA, USA.
[32]The Wubi Group, 2000. Wubi code: a method for inputting Chinese characters. Chin J Inform Process, 24(3):1-10 (in Chinese).
[33]Turney PD, 2012. Domain and function: a dual-space model of semantic relations and compositions. J Artif Intell Res, 44(1):533-585.
[34]Wang JT, 2011. Research towards Chinese string similarity based on the clustering feature of Chinese characters. New Technol Lib Inform Ser, (2):48-53 (in Chinese).
[35]Wang L, 1959. Chinese Modern Grammar. Zhonghua Book Company, Hong Kong, China (in Chinese).
[36]Wang SK, 2016. New Modern Chinese Course. Shanghai Jiao Tong University Press, Shanghai, China (in Chinese).
[37]Wang SR, Zhou W, Zhou Q, 2020. Radical and stroke-enhanced Chinese word embeddings based on neural networks. Neur Process Lett, 52(2):1109-1121.
[38]Weigang L, da Silva NC, 1999. A study of parallel neural networks. Proc Int Joint Conf on Neural Networks, p.1113-1116.
[39]Weigang L, Enamoto LM, Li DL, et al., 2022. New directions for artificial intelligence: human, machine, biological, and quantum intelligence. Front Inform Technol Electron Eng, 23(6):984-990.
[40]Xu J, Liu JW, Zhang LG, et al., 2016. Improve Chinese word embeddings by exploiting internal structure. Proc Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, p.1041-1050.
[41]Xu S, 1997. Discussing Writing and Explaining Characters. Yuelu Publishing House, Changsha, China (in Chinese).
[42]Yeromiyan T, 2022. The Six Types of Chinese Characters. https://studycli.org/chinese-characters/types-of-chinese-characters/ [Accessed on May 30, 2023].
[43]Yu JX, Jian X, Xin H, et al., 2017. Joint embeddings of Chinese words, characters, and fine-grained subcharacter components. Proc Conf on Empirical Methods in Natural Language Processing, p.286-291.
[44]Zhang B, 2008. Newly Edited Chinese Language (2nd Ed.). Fudan University Publishing, Shanghai, China (in Chinese).
[45]Zhang Y, Liu YG, Zhu JJ, et al., 2019. Learning Chinese word embeddings from stroke, structure and pinyin of characters. Proc 28th ACM Int Conf on Information and Knowledge Management, p.1011-1020.
[46]Zhang ZB, Zhong ZM, Yuan PP, et al., 2023. Improving entity linking in Chinese domain by sense embedding based on graph clustering. J Comput Sci Technol, 38(1):196-210.
[47]Zhao DP, Xiong HX, Tian FS, et al., 2021. Research on Chinese text similarity calculation based on sequence alignment algorithm. Lib Inform Serv, 65(11):101-112 (in Chinese).
[48]Zhao YR, 2017. A Grammar of Spoken Chinese. University of California Press, CA, USA.
[49]Zhou J, Ke P, Qiu XP, et al., 2023. ChatGPT: potential, prospects, and limitations. Front Inform Technol Electron Eng, early access.
[50]Zhou JN, Wang JK, Liu GS, 2019. Multiple character embeddings for Chinese word segmentation. Proc 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, p.210-216.
[51]Zhuang CY, Zheng YJ, Huang WH, et al., 2019. Joint fine-grained components continuously enhance Chinese word embeddings. IEEE Access, 7:174699-174708.