CLC number: TN929.5
On-line Access: 2025-03-07
Received: 2024-03-30
Revision Accepted: 2024-07-25
Crosschecked: 2025-03-07
Tianjiao CHEN, Xiaoyun WANG, Meihui HUA, Qinqin TANG. Incentive-based task offloading for digital twins in 6G native artificial intelligence networks: a learning approach[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(2): 214-229.
@article{chen2025incentive,
title="Incentive-based task offloading for digital twins in 6G native artificial intelligence networks: a learning approach",
author="Tianjiao CHEN and Xiaoyun WANG and Meihui HUA and Qinqin TANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="2",
pages="214-229",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400240"
}
Abstract: A communication network can natively provide artificial intelligence (AI) training services to resource-limited network entities, enabling them to quickly build accurate digital twins and achieve high-level network autonomy. Because the network entities that require digital twins and those that provide AI services may belong to different operators, incentive mechanisms are needed to maximize the utility of both sides. In this paper, we formulate a Stackelberg game to model AI training task offloading for digital twins in native AI networks, with the operator that owns the base stations as the leader and the resource-limited network entities as the followers. We analyze the game and derive its Stackelberg equilibrium solutions. To cope with the time-varying wireless network environment, we further design a deep reinforcement learning algorithm to achieve dynamic pricing and task offloading. Finally, extensive simulations are conducted to verify the effectiveness of our proposal.
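To make the leader-follower structure described in the abstract concrete, the following is a minimal Python sketch of a Stackelberg pricing and offloading interaction solved by backward induction. The logarithmic follower utility, linear leader revenue, parameter values, and function names are illustrative assumptions only; they are not the paper's actual model, and the grid search here stands in for, rather than reproduces, the paper's deep reinforcement learning solution.

```python
# Minimal sketch of the Stackelberg structure in the abstract: an operator
# (leader) prices AI-training resources, and resource-limited network entities
# (followers) choose how much digital-twin training work to offload.
# All utility forms and numbers below are illustrative assumptions.

import numpy as np

RESOURCE_COST = 0.2  # leader's assumed unit cost of serving offloaded work


def follower_best_response(price: float, value: float, local_cost: float) -> float:
    """Offloading fraction x in [0, 1] maximizing an assumed concave utility
    U_f(x) = value * log(1 + x) - (price - local_cost) * x.
    Setting dU_f/dx = 0 gives x* = value / (price - local_cost) - 1,
    clipped to [0, 1]."""
    net_price = price - local_cost
    if net_price <= 0:  # offloading is no more expensive than local computing
        return 1.0
    x_star = value / net_price - 1.0
    return float(np.clip(x_star, 0.0, 1.0))


def leader_utility(price: float, followers) -> float:
    """Leader revenue minus serving cost, given followers' best responses."""
    demand = sum(follower_best_response(price, v, c) for v, c in followers)
    return (price - RESOURCE_COST) * demand


def stackelberg_equilibrium(followers, price_grid):
    """Grid search over the leader's price; each candidate price anticipates
    the followers' best responses (standard backward induction)."""
    best_price = max(price_grid, key=lambda p: leader_utility(p, followers))
    offloading = [follower_best_response(best_price, v, c) for v, c in followers]
    return best_price, offloading


if __name__ == "__main__":
    # (value of training, local computing cost) per follower -- made-up numbers
    followers = [(1.0, 0.1), (0.8, 0.3), (1.5, 0.2)]
    price, x = stackelberg_equilibrium(followers, np.linspace(0.05, 2.0, 200))
    print(f"leader price: {price:.3f}, offloading fractions: {[round(v, 3) for v in x]}")
```

Under these assumed utilities each follower's best response has a closed form, so the leader only needs a one-dimensional price search; a learning-based approach such as the one proposed in the paper is motivated by the time-varying wireless environment, where such static backward induction is insufficient.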