

On-line Access: 2024-02-19

Received: 2023-02-14

Revision Accepted: 2024-02-19

Crosschecked: 2023-02-20



ORCID: Junping ZHANG, https://orcid.org/0000-0002-5924-3360


Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.1 P.6-11

http://doi.org/10.1631/FITEE.2300089


ChatGPT: potential, prospects, and limitations


Author(s):  Jie ZHOU, Pei KE, Xipeng QIU, Minlie HUANG, Junping ZHANG

Affiliation(s):  School of Computer Science, Fudan University, Shanghai 200433, China; more

Corresponding email(s):   jie_zhou@fudan.edu.cn, kepei@tsinghua.edu.cn, xpqiu@fudan.edu.cn, aihuang@tsinghua.edu.cn, jpzhang@fudan.edu.cn



Jie ZHOU, Pei KE, Xipeng QIU, Minlie HUANG, Junping ZHANG. ChatGPT: potential, prospects, and limitations[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(1): 6-11.

@article{zhou2024chatgpt,
title="ChatGPT: potential, prospects, and limitations",
author="Jie ZHOU and Pei KE and Xipeng QIU and Minlie HUANG and Junping ZHANG",
journal="Frontiers of Information Technology \& Electronic Engineering",
volume="25",
number="1",
pages="6-11",
year="2024",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2300089"
}

%0 Journal Article
%T ChatGPT: potential, prospects, and limitations
%A Jie ZHOU
%A Pei KE
%A Xipeng QIU
%A Minlie HUANG
%A Junping ZHANG
%J Frontiers of Information Technology & Electronic Engineering
%V 25
%N 1
%P 6-11
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2300089

TY - JOUR
T1 - ChatGPT: potential, prospects, and limitations
A1 - Jie ZHOU
A1 - Pei KE
A1 - Xipeng QIU
A1 - Minlie HUANG
A1 - Junping ZHANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 25
IS - 1
SP - 6
EP - 11
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300089
ER -


Abstract: 
Recently, OpenAI released Chat Generative Pre-trained Transformer (ChatGPT) (Schulman et al., 2022) (https://chat.openai.com), which has attracted considerable attention from industry and academia because of its impressive abilities. This is the first time that such a wide variety of open tasks can be solved well within a single large language model. To better understand ChatGPT, we briefly introduce its history, discuss its advantages and disadvantages, and point out several potential applications. Finally, we analyze its impact on the development of trustworthy artificial intelligence, conversational search engines, and artificial general intelligence.


References

[1]Bai YT, Jones A, Ndousse K, et al., 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. https://arxiv.org/abs/2204.05862

[2]Brooks RA, 1991. Intelligence without representation. Artif Intell, 47(1-3):139-159.

[3]Brown TB, Mann B, Ryder N, et al., 2020. Language models are few-shot learners. Proc 34th Int Conf on Neural Information Processing Systems, p.1877-1901.

[4]Chen M, Tworek J, Jun H, et al., 2021. Evaluating large language models trained on code. https://arxiv.org/abs/2107.03374

[5]Chowdhery A, Narang S, Devlin J, et al., 2022. PaLM: scaling language modeling with pathways. https://arxiv.org/abs/2204.02311

[6]Devlin J, Chang MW, Lee K, et al., 2019. BERT: pre-training of deep bidirectional transformers for language understanding. Proc Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, p.4171-4186.

[7]Fedus W, Zoph B, Shazeer N, et al., 2022. Switch transformers: scaling to trillion parameter models with simple and efficient sparsity. J Mach Learn Res, 23(120):1-39.

[8]Glaese A, McAleese N, Trebacz M, et al., 2022. Improving alignment of dialogue agents via targeted human judgements. https://arxiv.org/abs/2209.14375

[9]Hoffmann J, Borgeaud S, Mensch A, et al., 2022. Training compute-optimal large language models. https://arxiv.org/abs/2203.15556

[10]Hu K, 2023. ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note. https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ [Accessed on Feb. 12, 2023].

[11]Huang J, Mo ZB, Zhang ZY, et al., 2022. Behavioral control task supervisor with memory based on reinforcement learning for human-multi-robot coordination systems. Front Inform Technol Electron Eng, 23(8):1174-1188.

[12]Li L, Lin YL, Zheng NN, et al., 2017. Parallel learning: a perspective and a framework. IEEE/CAA J Autom Sin, 4(3):389-395.

[13]Lighthill J, 1973. Artificial intelligence: a general survey. In: Artificial Intelligence: a Paper Symposium. Science Research Council, London, UK.

[14]Moravec H, 1988. Mind Children. Harvard University Press, Cambridge, USA.

[15]Ouyang L, Wu J, Jiang X, et al., 2022. Training language models to follow instructions with human feedback. https://arxiv.org/abs/2203.02155

[16]Rae JW, Borgeaud S, Cai T, et al., 2021. Scaling language models: methods, analysis & insights from training Gopher. https://arxiv.org/abs/2112.11446

[17]Sanh V, Webson A, Raffel C, et al., 2021. Multitask prompted training enables zero-shot task generalization. 10th Int Conf on Learning Representations.

[18]Schulman J, Wolski F, Dhariwal P, et al., 2017. Proximal policy optimization algorithms. https://arxiv.org/abs/1707.06347

[19]Schulman J, Zoph B, Kim C, et al., 2022. ChatGPT: Optimizing Language Models for Dialogue. https://openai.com/blog/chatgpt [Accessed on Feb. 12, 2023].

[20]Stiennon N, Ouyang L, Wu J, et al., 2020. Learning to summarize from human feedback. Proc 34th Int Conf on Neural Information Processing Systems, p.3008-3021.

[21]Sun Y, Wang SH, Feng SK, et al., 2021. ERNIE 3.0: large-scale knowledge enhanced pre-training for language understanding and generation. https://arxiv.org/abs/2107.02137

[22]Vaswani A, Shazeer N, Parmar N, et al., 2017. Attention is all you need. Proc 31st Int Conf on Neural Information Processing Systems, p.6000-6010.

[23]Wang FY, Guo JB, Bu GQ, et al., 2022. Mutually trustworthy human-machine knowledge automation and hybrid augmented intelligence: mechanisms and applications of cognition, management, and control for complex systems. Front Inform Technol Electron Eng, 23(8):1142-1157.

[24]Wang FY, Miao QH, Li X, et al., 2023. What does ChatGPT say: the DAO from algorithmic intelligence to linguistic intelligence. IEEE/CAA J Autom Sin, 10(3):575-579.

[25]Wang YZ, Kordi Y, Mishra S, et al., 2022. Self-Instruct: aligning language model with self generated instructions. https://arxiv.org/abs/2212.10560

[26]Wei J, Bosma M, Zhao VY, et al., 2021. Finetuned language models are zero-shot learners. 10th Int Conf on Learning Representations.

[27]Wei J, Wang XZ, Schuurmans D, et al., 2022a. Chain-of-thought prompting elicits reasoning in large language models. https://arxiv.org/abs/2201.11903

[28]Wei J, Tay Y, Bommasani R, et al., 2022b. Emergent abilities of large language models. https://arxiv.org/abs/2206.07682

[29]Weigang L, Enamoto LM, Li DL, et al., 2022. New directions for artificial intelligence: human, machine, biological, and quantum intelligence. Front Inform Technol Electron Eng, 23(6):984-990.

[30]Xue JR, Hu B, Li LX, et al., 2022. Human-machine augmented intelligence: research and applications. Front Inform Technol Electron Eng, 23(8):1139-1141.

[31]Zeng W, Ren XZ, Su T, et al., 2021. PanGu-α: large-scale autoregressive pretrained Chinese language models with auto-parallel computation. https://arxiv.org/abs/2104.12369

[32]Zhang ZY, Gu YX, Han X, et al., 2021. CPM-2: large-scale cost-effective pre-trained language models. AI Open, 2:216-224.
