
On-line Access: 2023-12-04

Received: 2023-08-08

Revision Accepted: 2023-12-05

Crosschecked: 2023-09-25


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.11 P.1513-1519


Software development in the age of intelligence: embracing large language models with the right approach

Author(s):  Xin PENG

Affiliation(s):  School of Computer Science, Fudan University, Shanghai 200438, China

Corresponding email(s):   pengxin@fudan.edu.cn


Xin PENG. Software development in the age of intelligence: embracing large language models with the right approach[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(11): 1513-1519.

@article{Peng2023,
title="Software development in the age of intelligence: embracing large language models with the right approach",
author="Xin PENG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="11",
pages="1513-1519",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300537"
}

%0 Journal Article
%T Software development in the age of intelligence: embracing large language models with the right approach
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 11
%P 1513-1519
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2300537

TY  - JOUR
T1  - Software development in the age of intelligence: embracing large language models with the right approach
A1  - Xin PENG
JO  - Frontiers of Information Technology & Electronic Engineering
VL  - 24
IS  - 11
SP  - 1513
EP  - 1519
SN  - 2095-9184
Y1  - 2023
PB  - Zhejiang University Press & Springer
DO  - 10.1631/FITEE.2300537
ER  -

The emergence of large language models (LLMs), represented by ChatGPT, has had a profound impact on various fields, including software engineering, and has also raised widespread concerns. To find the right way through the fog, we have recently been discussing and contemplating the theme of "software development in the age of LLMs," or rather "the capability of LLMs in software development," based on various technical literature, shared practitioner experiences, and our own preliminary explorations. In addition, I have participated in several online interviews and discussions on this theme, which have prompted further insights and reflections. Based on this thinking and these discussions, I have composed this article to share these ideas and foster open discussion within the academic community. LLMs still largely remain a black box, and the technology continues to iterate and evolve rapidly. Moreover, the cases reported by practitioners and our own practical experience with LLM-based software development remain relatively limited. Therefore, many of the insights and reflections in this article may prove inaccurate, and they may be continually revised as technology and practice develop.








Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE