On-line Access: 2025-03-07
Received: 2024-04-16
Revision Accepted: 2024-12-03
Crosschecked: 2025-03-07
Siyao SONG, Guoao SUN, Yifan CHANG, Nengwen ZHAO, Yijun YU. TSNet: a foundation model for wireless network status prediction in digital twins[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(2): 301-308. https://doi.org/10.1631/FITEE.2400295
Abstract: Predicting future network status is a key capability of digital twin networks: it helps operators estimate network performance and take proactive measures in advance. Existing methods, including statistical, machine-learning-based, and deep-learning-based approaches, suffer from limited generalization ability and heavy data dependency. To overcome these drawbacks, and inspired by the success of the pretraining and fine-tuning framework in the natural language processing (NLP) and computer vision (CV) domains, we propose TSNet, a Transformer-based foundation model for predicting various network status measurements in digital twins. To adapt the Transformer architecture to time-series data, we implement frequency learning attention and time-series decomposition blocks. We also design a fine-tuning strategy that enables TSNet to adapt to new data or scenarios. Experiments demonstrate that zero-shot prediction with TSNet, using no training data, outperforms fully supervised baseline methods, and that fine-tuning further improves prediction accuracy. Overall, TSNet exhibits strong generalization capability and achieves high accuracy across various datasets.
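Illustrative note: the abstract names two architectural adaptations, a time-series decomposition block and frequency learning attention, but does not publish code. The sketch below is a minimal, hypothetical PyTorch illustration of how such blocks are commonly realized (a moving-average trend/seasonal split and top-k spectral filtering); the module names and parameters (SeriesDecomposition, FrequencyAttention, kernel_size, top_k) are assumptions for illustration only and are not the authors' implementation of TSNet.

# Hypothetical sketch of the two adaptations named in the abstract; not the
# authors' code. Shapes follow the common (batch, length, channels) convention.
import torch
import torch.nn as nn

class SeriesDecomposition(nn.Module):
    """Split a series into trend and seasonal parts with a moving average."""
    def __init__(self, kernel_size: int = 25):
        super().__init__()
        self.avg = nn.AvgPool1d(kernel_size, stride=1,
                                padding=kernel_size // 2,
                                count_include_pad=False)

    def forward(self, x: torch.Tensor):
        # x: (batch, length, channels)
        trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
        seasonal = x - trend
        return seasonal, trend

class FrequencyAttention(nn.Module):
    """Keep only the strongest frequency components of the input; a stand-in
    for the frequency learning attention block described in the abstract."""
    def __init__(self, top_k: int = 8):
        super().__init__()
        self.top_k = top_k

    def forward(self, x: torch.Tensor):
        # x: (batch, length, channels); filter in the frequency domain
        spec = torch.fft.rfft(x, dim=1)
        amp = spec.abs().mean(dim=-1)                       # (batch, freq)
        k = min(self.top_k, amp.shape[1])
        idx = amp.topk(k, dim=1).indices                    # dominant frequencies
        mask = torch.zeros_like(amp).scatter_(1, idx, 1.0)  # keep top-k bins
        spec = spec * mask.unsqueeze(-1)
        return torch.fft.irfft(spec, n=x.shape[1], dim=1)

if __name__ == "__main__":
    x = torch.randn(4, 96, 7)           # 4 series, 96 time steps, 7 measurements
    seasonal, trend = SeriesDecomposition()(x)
    filtered = FrequencyAttention()(seasonal)
    print(filtered.shape)               # torch.Size([4, 96, 7])

In a Transformer-style forecaster, such blocks would typically sit before or inside the encoder layers, with the decomposed trend modeled separately from the frequency-filtered seasonal component; the exact arrangement in TSNet is described in the paper itself.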