
On-line Access: 2024-02-19

Received: 2023-05-31

Revision Accepted: 2024-02-19

Crosschecked: 2024-01-03


ORCID:

Mingyuan BAI

https://orcid.org/0000-0002-2454-4219

Derun ZHOU

https://orcid.org/0009-0008-0931-4520

Qibin ZHAO

https://orcid.org/0000-0002-4442-3182


Frontiers of Information Technology & Electronic Engineering  2024 Vol.25 No.1 P.160-169

http://doi.org/10.1631/FITEE.2300392


TendiffPure: a convolutional tensor-train denoising diffusion model for purification


Author(s):  Mingyuan BAI, Derun ZHOU, Qibin ZHAO

Affiliation(s):  RIKEN AIP, Tokyo 1030027, Japan; School of Environment and Society, Tokyo Institute of Technology, Tokyo 1528550, Japan

Corresponding email(s):   mingyuan.bai@riken.jp, zhouderun2000@gmail.com, qibin.zhao@riken.jp

Key Words:  Diffusion models, Tensor decomposition, Image denoising


Mingyuan BAI, Derun ZHOU, Qibin ZHAO. TendiffPure: a convolutional tensor-train denoising diffusion model for purification[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(1): 160-169.

@article{title="TendiffPure: a convolutional tensor-train denoising diffusion model for purification",
author="Mingyuan BAI, Derun ZHOU, Qibin ZHAO",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="25",
number="1",
pages="160-169",
year="2024",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2300392"
}

%0 Journal Article
%T TendiffPure: a convolutional tensor-train denoising diffusion model for purification
%A Mingyuan BAI
%A Derun ZHOU
%A Qibin ZHAO
%J Frontiers of Information Technology & Electronic Engineering
%V 25
%N 1
%P 160-169
%@ 2095-9184
%D 2024
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.2300392

TY - JOUR
T1 - TendiffPure: a convolutional tensor-train denoising diffusion model for purification
A1 - Mingyuan BAI
A1 - Derun ZHOU
A1 - Qibin ZHAO
JO - Frontiers of Information Technology & Electronic Engineering
VL - 25
IS - 1
SP - 160
EP - 169
SN - 2095-9184
Y1 - 2024
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2300392
ER -


Abstract: 
Diffusion models are effective purification methods, in which noises or adversarial attacks are removed using generative approaches before pre-existing classifiers conduct classification tasks. However, the efficiency of diffusion models remains a concern, and existing solutions based on knowledge distillation can jeopardize the generation quality because of the small number of generation steps. Hence, we propose TendiffPure, a tensorized and compressed diffusion model for purification. Unlike knowledge distillation methods, we directly compress the U-Nets that serve as backbones of diffusion models using tensor-train decomposition, which reduces the number of parameters and captures more spatial information in multi-dimensional data such as images. The space complexity is reduced from O(N²) to O(NR²), where R≤4 is the tensor-train rank and N is the number of channels. Experimental results show that TendiffPure obtains high-quality purification results more efficiently and outperforms the baseline purification methods on the CIFAR-10, Fashion-MNIST, and MNIST datasets under two noises and one adversarial attack.
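The parameter saving claimed in the abstract can be sketched with a toy tensor-train (TT) factorization of a 1×1 convolution's channel-mixing matrix. This is an illustration under our own assumptions, not the authors' implementation; the helper names `tt_matrix` and `tt_to_dense` are hypothetical:

```python
import numpy as np

# Sketch: a 1x1 convolution mixing N channels is an N x N weight matrix.
# Factoring N = n1 * n2 and storing the matrix as two TT cores of rank R
# replaces the N^2 parameters with n1^2*R + n2^2*R parameters.

def tt_matrix(n1, n2, R, rng):
    """Random rank-R TT representation of an (n1*n2) x (n1*n2) matrix."""
    G1 = rng.standard_normal((1, n1, n1, R))   # (rank0, out1, in1, rank1)
    G2 = rng.standard_normal((R, n2, n2, 1))   # (rank1, out2, in2, rank2)
    return G1, G2

def tt_to_dense(G1, G2):
    """Contract the cores back into the full matrix (for verification)."""
    W = np.einsum('aijr,rklb->ikjl', G1, G2)   # sum over the TT rank r
    n1, n2 = G1.shape[1], G2.shape[1]
    return W.reshape(n1 * n2, n1 * n2)

n1, n2, R = 8, 8, 4                  # N = 64 channels, TT rank R <= 4
G1, G2 = tt_matrix(n1, n2, R, np.random.default_rng(0))
dense_params = (n1 * n2) ** 2        # 4096 for the uncompressed matrix
tt_params = G1.size + G2.size        # 512 for the two TT cores
print(dense_params, tt_params)       # 4096 512, an 8x reduction
print(tt_to_dense(G1, G2).shape)     # (64, 64)
```

With longer trains the middle cores carry a rank index on both sides, i.e., shape (R, m, n, R), which is where the O(NR²) bound quoted in the abstract comes from; the two-core case above degenerates to O(NR).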




