
CLC number: TP181

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2023-09-18


 ORCID:

Bing LI

https://orcid.org/0000-0002-1251-4346

Peng YANG

https://orcid.org/0000-0002-1184-8117


Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.1 P.64-83

http://doi.org/10.1631/FITEE.2300410


Advances and challenges in artificial intelligence text generation


Author(s):  Bing LI, Peng YANG, Yuankang SUN, Zhongjian HU, Meng YI

Affiliation(s):  School of Computer Science and Engineering, Southeast University, Nanjing 210000, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 210000, China

Corresponding email(s):   libing@seu.edu.cn, pengyang@seu.edu.cn, syk@seu.edu.cn, huzj@seu.edu.cn

Key Words:  AI text generation, Natural language processing, Machine learning, Deep learning


Bing LI, Peng YANG, Yuankang SUN, Zhongjian HU, Meng YI. Advances and challenges in artificial intelligence text generation[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(1): 64-83.



Abstract: 
Text generation is an essential research area in artificial intelligence (AI) and natural language processing (NLP), and it provides key technical support for the rapid development of AI-generated content (AIGC). It builds on technologies such as NLP, machine learning, and deep learning, which learn language rules from training data so that models can automatically generate text that meets grammatical and semantic requirements. In this paper, we organize and systematically summarize the main research progress in text generation, review recent text generation papers, and present a detailed account of the underlying technical models. In addition, several typical text generation application systems are presented. Finally, we discuss challenges and future directions in AI text generation. We conclude that improving the quality, quantity, interactivity, and adaptability of generated text can fundamentally advance the development of AI text generation.
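As a toy illustration of the idea of learning language rules from training data, the following sketch trains a bigram model on a tiny corpus and samples new text from it. This is not any system described in the paper; the corpus, function names, and parameters are invented for this example:

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Learn bigram transitions (which word may follow which) from a corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, seed, max_words=10, rng=None):
    """Generate text by repeatedly sampling a learned next word."""
    rng = rng or random.Random(0)  # fixed seed keeps the sketch reproducible
    output = [seed]
    for _ in range(max_words - 1):
        candidates = model.get(output[-1])
        if not candidates:  # no learned continuation; stop early
            break
        output.append(rng.choice(candidates))
    return " ".join(output)

corpus = ("text generation is an essential research area in artificial "
          "intelligence and natural language processing")
model = train_bigram_model(corpus)
print(generate(model, "text", max_words=5))
```

Modern neural text generators replace the bigram table with a trained network (e.g., a Transformer language model), but the generation loop is the same in spirit: repeatedly predict and sample the next token given what has been generated so far.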

Advances and challenges in artificial intelligence text generation

Bing LI1,2, Peng YANG1,2, Yuankang SUN1,2, Zhongjian HU1,2, Meng YI1,2
1School of Computer Science and Engineering, Southeast University, Nanjing 210000, China
2Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 210000, China
Abstract: Text generation is an important research area in artificial intelligence and natural language processing, providing key technical support for the rapid development of AI-generated content. The task builds on technologies such as natural language processing, machine learning, and deep learning, training models to learn language rules and automatically generate text that meets grammatical and semantic requirements. This paper organizes and systematically summarizes the main research progress in text generation, comprehensively surveys recent text generation literature, and introduces the relevant technical models in detail. In addition, typical text generation application systems are introduced. Finally, the challenges and future research directions of AI text generation are analyzed and discussed. We conclude that improving the quality, quantity, interactivity, and adaptability of generated text can fundamentally advance the development of AI text generation.

Key words: AI text generation; natural language processing; machine learning; deep learning




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE