On-line Access: 2025-01-25
Received: 2024-04-16
Revision Accepted: 2024-12-03
Siyao SONG, Guoao SUN, Yifan CHANG, Nengwen ZHAO, Yijun YU. TSNet: a foundation model for wireless network status prediction in digital twin[J]. Frontiers of Information Technology & Electronic Engineering. DOI: 10.1631/FITEE.2400295
Abstract: Predicting future network status is a key capability in digital twin networks and can assist operators in estimating network performance and taking proactive measures in advance. Existing methods, including statistical, machine learning-based, and deep learning-based methods, suffer from limited generalization ability and heavy data dependency. To overcome these drawbacks, inspired by the success of the pretraining and fine-tuning framework in the natural language processing and computer vision domains, we propose TSNet, a transformer-based foundation model for predicting various network status measurements in digital twins. To adapt the transformer architecture to time series data, TSNet incorporates frequency learning attention and time series decomposition blocks. We also design a fine-tuning strategy that enables TSNet to adapt to new data or scenarios. Experiments demonstrate that zero-shot prediction with TSNet, without any training data, outperforms fully supervised baseline methods, and that fine-tuning further improves prediction accuracy. Overall, TSNet exhibits strong capability and accuracy across various datasets. For future work, we plan to explore TSNet's generalization ability on various downstream tasks (such as anomaly detection and pattern matching) using few-shot learning approaches.
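The abstract mentions a time series decomposition block but does not specify its design. As an illustration only (TSNet's actual block may differ), a minimal sketch of the moving-average decomposition commonly used in transformer-based forecasters, splitting a series into a smooth trend and a seasonal residual, might look like:

```python
import numpy as np

def series_decomposition(x, kernel_size=25):
    """Split a 1-D series into trend and seasonal parts via moving average.

    A moving-average filter (with edge padding so the output keeps the
    input length) estimates the slowly varying trend; the residual is
    treated as the seasonal component. Illustrative sketch, not TSNet's
    published implementation.
    """
    pad = kernel_size // 2
    # Repeat the edge values so the "valid" convolution preserves length.
    padded = np.concatenate([np.repeat(x[0], pad), x, np.repeat(x[-1], pad)])
    kernel = np.ones(kernel_size) / kernel_size
    trend = np.convolve(padded, kernel, mode="valid")
    seasonal = x - trend
    return trend, seasonal

# Example: a sine wave riding on a linear ramp.
t = np.arange(200)
x = 0.05 * t + np.sin(2 * np.pi * t / 20)
trend, seasonal = series_decomposition(x)
```

By construction the two components sum exactly back to the input, so the block is lossless and can be stacked repeatedly inside a network.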