CLC number: TP391
On-line Access: 2024-12-26
Received: 2023-10-20
Revision Accepted: 2024-12-26
Crosschecked: 2024-02-19
Jian GUO, Saizhuo WANG, Lionel M. NI, Heung-Yeung SHUM. Quant 4.0: engineering quantitative investment with automated, explainable, and knowledge-driven artificial intelligence[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2300720
Quant 4.0: engineering quantitative investment with automated, explainable, and knowledge-driven artificial intelligence
1 International Digital Economy Academy (IDEA), Shenzhen 518045, China
2 The Hong Kong University of Science and Technology, Hong Kong SAR 999077, China
3 The Hong Kong University of Science and Technology (Guangzhou), Guangzhou 511453, China
Abstract: Quantitative investment (quant) is an interdisciplinary field that combines financial engineering, computer science, mathematics, statistics, and related disciplines. Over the past decades, quant has become one of the mainstream investment methodologies and has evolved through three generations: quant 1.0 traded on mathematical models to discover mispriced assets in the market; quant 2.0 shifted the quant research pipeline from small "strategy workshops" to large "alpha factories"; quant 3.0 applied deep learning techniques to discover complex nonlinear pricing rules. Despite its strength in prediction, the success of deep learning still depends on extremely large data volumes and requires substantial human labor to tune these black-box neural network models. To address these limitations, this paper proposes the concept of "quant 4.0" and offers an engineering perspective on next-generation quantitative investment technology. Quant 4.0 has three key components. First, automated artificial intelligence (AI), following the philosophy that "algorithms produce algorithms, models build models, and AI creates AI," transforms the quant strategy research pipeline from traditional hand-crafted modeling to advanced automated modeling. Second, explainable AI develops techniques to better understand and interpret investment decisions made by machine learning black boxes, and to explain complicated and hidden risk exposures. Third, knowledge-driven AI complements data-driven AI, represented by deep learning, by incorporating prior knowledge into the modeling process, thereby improving the performance of quant methods in scenarios such as value investing. Combining these three elements, we discuss how to realize the "quant 4.0" vision as a concrete system. In addition, we discuss applications of large language models in quantitative investment. Finally, we propose 10 challenging problems in quantitative investment and discuss potential solutions, research directions, and future trends.