
On-line Access: 2025-03-07

Received: 2024-04-16

Revision Accepted: 2024-12-03

Crosschecked: 2025-03-07



ORCID: Nengwen ZHAO, https://orcid.org/0009-0002-7027-9978


Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.2 P.301-308

http://doi.org/10.1631/FITEE.2400295


TSNet: a foundation model for wireless network status prediction in digital twins


Author(s):  Siyao SONG, Guoao SUN, Yifan CHANG, Nengwen ZHAO, Yijun YU

Affiliation(s):  Huawei Technologies Co., Ltd., Shenzhen 518129, China

Corresponding email(s):   zhaonengwen@huawei.com

Key Words:  Digital twin; Communication network; Foundation model; Network status prediction



Siyao SONG, Guoao SUN, Yifan CHANG, Nengwen ZHAO, Yijun YU. TSNet: a foundation model for wireless network status prediction in digital twins[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(2): 301-308.

@article{Song2025TSNet,
title="TSNet: a foundation model for wireless network status prediction in digital twins",
author="Siyao SONG, Guoao SUN, Yifan CHANG, Nengwen ZHAO, Yijun YU",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="2",
pages="301-308",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2400295"
}

%0 Journal Article
%T TSNet: a foundation model for wireless network status prediction in digital twins
%A Siyao SONG
%A Guoao SUN
%A Yifan CHANG
%A Nengwen ZHAO
%A Yijun YU
%J Frontiers of Information Technology & Electronic Engineering
%V 26
%N 2
%P 301-308
%@ 2095-9184
%D 2025
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2400295

TY - JOUR
T1 - TSNet: a foundation model for wireless network status prediction in digital twins
A1 - Siyao SONG
A1 - Guoao SUN
A1 - Yifan CHANG
A1 - Nengwen ZHAO
A1 - Yijun YU
JO - Frontiers of Information Technology & Electronic Engineering
VL - 26
IS - 2
SP - 301
EP - 308
SN - 2095-9184
Y1 - 2025
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2400295
ER -


Abstract: 
Predicting future network status is a key capability in digital twin networks and can assist operators in estimating network performance and taking proactive measures in advance. Existing methods, including statistical methods, machine-learning-based methods, and deep-learning-based methods, suffer from several limitations in generalization ability and data dependency. To overcome these drawbacks, inspired by the success of the pretraining and fine-tuning framework in the natural language processing (NLP) and computer vision (CV) domains, we propose TSNet, a Transformer-based foundation model for predicting various network status measurements in digital twins. To adapt the Transformer architecture to time-series data, frequency learning attention and time-series decomposition blocks are implemented. We also design a fine-tuning strategy that enables TSNet to adapt to new data or scenarios. Experiments demonstrate that zero-shot prediction with TSNet, using no training data, achieves higher accuracy than fully supervised baseline methods. Prediction accuracy can be improved further by fine-tuning the model. Overall, TSNet exhibits strong capability and achieves high accuracy across various datasets.
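
The abstract names two Transformer adaptations for time-series data: frequency learning attention and time-series decomposition blocks. The authors' implementation is not reproduced here; the following is a minimal PyTorch sketch of common forms of such blocks, assuming a moving-average decomposition and learned per-frequency weighting. All module names, shapes, and hyperparameters are illustrative assumptions, not the paper's released code.

    # Illustrative sketch only: common forms of the two blocks named in the
    # abstract. Names and hyperparameters are assumptions, not TSNet's code.
    import torch
    import torch.nn as nn

    class SeriesDecomposition(nn.Module):
        """Split a series into a moving-average trend and a seasonal residual."""
        def __init__(self, kernel_size: int = 25):
            super().__init__()
            # stride-1 average pooling acts as a centered moving average
            self.avg = nn.AvgPool1d(kernel_size, stride=1,
                                    padding=kernel_size // 2,
                                    count_include_pad=False)

        def forward(self, x: torch.Tensor):
            # x: (batch, time, channels) -> pool over the time axis
            trend = self.avg(x.transpose(1, 2)).transpose(1, 2)
            return x - trend, trend              # (seasonal, trend)

    class FrequencyAttention(nn.Module):
        """Learn per-frequency weights in the FFT domain and filter the series."""
        def __init__(self, seq_len: int, channels: int):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(seq_len // 2 + 1, channels))

        def forward(self, x: torch.Tensor):
            spec = torch.fft.rfft(x, dim=1)           # (batch, freq, channels)
            attn = torch.softmax(self.logits, dim=0)  # attention over frequencies
            return torch.fft.irfft(spec * attn, n=x.size(1), dim=1)

    # Example: decompose a batch of KPI windows, then filter the seasonal part.
    x = torch.randn(8, 96, 4)                         # 8 series, 96 steps, 4 KPIs
    seasonal, trend = SeriesDecomposition()(x)
    filtered = FrequencyAttention(seq_len=96, channels=4)(seasonal)

The softmax over the frequency axis makes the learned weights act as soft attention, emphasizing a few dominant periodicities per channel rather than applying an unconstrained filter.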

TSNet: a foundation model for wireless network status prediction in digital twins

Siyao SONG, Guoao SUN, Yifan CHANG, Nengwen ZHAO, Yijun YU
Huawei Technologies Co., Ltd., Shenzhen 518129, China
Abstract: Predicting the future status of a network is a key capability of digital twin networks; it can help operations staff estimate changes in network performance and take action in advance. Existing prediction methods, including statistical, machine learning, and deep learning methods, have many limitations in generalization ability and dependence on training data. To address these problems, inspired by the pretraining and fine-tuning framework in natural language processing and computer vision, we propose TSNet, a Transformer-based foundation model for predicting diverse network performance indicators. To better model time series with the Transformer architecture, frequency-domain attention and time-series decomposition are introduced. In addition, a lightweight fine-tuning strategy is designed so that TSNet can quickly generalize to new data or scenarios (one such strategy is sketched after the keywords below). Experimental results show that zero-shot TSNet prediction, using no training data, outperforms supervised baseline methods; with few-shot fine-tuning, prediction accuracy can be improved further. Overall, TSNet achieves high accuracy and generalization on a variety of data.

Keywords: Digital twin; Communication network; Foundation model; Network status prediction
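
Both abstracts describe a lightweight fine-tuning strategy for adapting the pretrained model to new data or scenarios with few samples. The paper's actual strategy is not reproduced here; below is a minimal sketch of one common parameter-efficient approach (freeze the pretrained backbone, train only a small prediction head), where "backbone", "head", and "loader" are placeholders.

    # Hedged sketch of a lightweight fine-tuning loop: freeze the pretrained
    # backbone and update only a small head on the target scenario's data.
    # The paper's actual strategy (e.g., adapter layers) may differ.
    import torch
    import torch.nn as nn

    def fine_tune(backbone: nn.Module, head: nn.Module, loader,
                  epochs: int = 5, lr: float = 1e-3) -> nn.Module:
        for p in backbone.parameters():
            p.requires_grad = False              # keep pretrained weights fixed
        backbone.eval()                          # no dropout/BN updates in the backbone
        optimizer = torch.optim.Adam(head.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for history, target in loader:       # a few labeled windows suffice
                prediction = head(backbone(history))
                loss = loss_fn(prediction, target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return head

Because only the head's parameters are updated, this kind of few-shot adaptation is cheap enough to run per scenario, which is consistent with the "lightweight" framing in the abstract.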


