On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Tianjiao CHEN, Xiaoyun WANG, Meihui HUA, Qinqin TANG. Incentive-based task offloading for digital twin in 6G native AI networks: a learning approach[J]. Frontiers of Information Technology & Electronic Engineering, 1998, -1(-1): .
@article{title="Incentive-based task offloading for digital twin in 6G native AI networks: a learning approach",
author="Tianjiao CHEN, Xiaoyun WANG, Meihui HUA, Qinqin TANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="-1",
number="-1",
pages="",
year="1998",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400240"
}
%0 Journal Article
%T Incentive-based task offloading for digital twin in 6G native AI networks: a learning approach
%A Tianjiao CHEN
%A Xiaoyun WANG
%A Meihui HUA
%A Qinqin TANG
%J Journal of Zhejiang University SCIENCE C
%V -1
%N -1
%P
%@ 2095-9184
%D 1998
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2400240
TY - JOUR
T1 - Incentive-based task offloading for digital twin in 6G native AI networks: a learning approach
A1 - Tianjiao CHEN
A1 - Xiaoyun WANG
A1 - Meihui HUA
A1 - Qinqin TANG
JO - Journal of Zhejiang University Science C
VL - -1
IS - -1
SP -
EP -
SN - 2095-9184
Y1 - 1998
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2400240
ER -
Abstract: A communication network can natively provide artificial intelligence (AI) training services for resource-limited network entities, enabling them to quickly build accurate digital twins and achieve a high level of network autonomy. Because the network entities that require digital twins and those that provide AI services may belong to different operators, incentive mechanisms are needed to maximize the utility of both. In this paper, we establish a Stackelberg game to model AI training task offloading for digital twins in native AI networks, with the operator of the base stations as the leader and the resource-limited network entities as the followers. We analyze the Stackelberg equilibrium to obtain equilibrium solutions. Considering the time-varying wireless network environment, we further design a deep reinforcement learning algorithm to achieve dynamic pricing and task offloading. Finally, extensive simulation experiments are conducted to verify the effectiveness of our proposal.
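The leader-follower structure described in the abstract can be sketched by backward induction: each follower chooses its offloaded workload as a best response to the leader's posted price, and the leader then prices to maximize its own utility over those responses. The logarithmic follower utility, the valuation parameters `alphas`, and the unit service cost below are illustrative assumptions for the sketch, not the paper's actual model.

```python
import numpy as np

def follower_best_response(alpha, price):
    # Follower maximizes alpha*log(1+x) - price*x over offloaded amount x >= 0.
    # Setting the derivative alpha/(1+x) - price to zero gives x* = alpha/price - 1.
    return max(alpha / price - 1.0, 0.0)

def leader_utility(price, alphas, unit_cost=0.1):
    # Leader earns price per unit offloaded, minus a per-unit service cost.
    xs = [follower_best_response(a, price) for a in alphas]
    return (price - unit_cost) * sum(xs), xs

# Hypothetical valuation parameters for three resource-limited followers.
alphas = [2.0, 3.0, 4.0]

# Leader searches a price grid for the revenue-maximizing (Stackelberg) price.
prices = np.linspace(0.2, 3.0, 200)
best_price = max(prices, key=lambda p: leader_utility(p, alphas)[0])
best_utility, offloads = leader_utility(best_price, alphas)
print(f"equilibrium price {best_price:.2f}, leader utility {best_utility:.2f}, "
      f"offloaded amounts {[round(x, 2) for x in offloads]}")
```

In the paper's dynamic setting the grid search over prices would be replaced by a deep reinforcement learning agent that adapts the price to the time-varying wireless environment; the backward-induction structure of the game stays the same.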