CLC number: TP312

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2014-01-15


Journal of Zhejiang University SCIENCE C 2014 Vol.15 No.2 P.119-125

http://doi.org/10.1631/jzus.C1300197


A pruning algorithm with L1/2 regularizer for extreme learning machine


Author(s):  Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang

Affiliation(s):  School of Mathematical Sciences, Dalian University of Technology, Dalian 116023, China

Corresponding email(s):   fanyetian@mail.dlut.edu.cn, wuweiw@dlut.edu.cn

Key Words:  Extreme learning machine (ELM), L1/2 regularizer, Network pruning


Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang. A pruning algorithm with L1/2 regularizer for extreme learning machine[J]. Journal of Zhejiang University Science C, 2014, 15(2): 119-125.

@article{title="A pruning algorithm with L1/2 regularizer for extreme learning machine",
author="Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang",
journal="Journal of Zhejiang University Science C",
volume="15",
number="2",
pages="119-125",
year="2014",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C1300197"
}

%0 Journal Article
%T A pruning algorithm with L1/2 regularizer for extreme learning machine
%A Ye-tian Fan
%A Wei Wu
%A Wen-yu Yang
%A Qin-wei Fan
%A Jian Wang
%J Journal of Zhejiang University SCIENCE C
%V 15
%N 2
%P 119-125
%@ 1869-1951
%D 2014
%I Zhejiang University Press & Springer
%R 10.1631/jzus.C1300197

TY - JOUR
T1 - A pruning algorithm with L1/2 regularizer for extreme learning machine
A1 - Ye-tian Fan
A1 - Wei Wu
A1 - Wen-yu Yang
A1 - Qin-wei Fan
A1 - Jian Wang
JO - Journal of Zhejiang University Science C
VL - 15
IS - 2
SP - 119
EP - 125
SN - 1869-1951
Y1 - 2014
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.C1300197
ER -


Abstract: 
Compared with traditional learning methods such as back propagation (BP), the extreme learning machine (ELM) offers much faster learning and requires less human intervention, and has therefore been widely used. In this paper we combine the L1/2 regularization method with the ELM to prune its hidden nodes. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes yet performs better than both the original network and a network pruned by L2 regularization.
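As a rough illustration of the approach, the sketch below trains an ELM's output weights by gradient descent under a smoothed L1/2 penalty, shrinks the step size when the gradient grows (standing in for the paper's variable learning coefficient), and prunes hidden nodes whose output weights are driven toward zero. The toy data, smoothing constant `eps`, step-size rule, and pruning threshold are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative data, not from the paper).
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

# ELM hidden layer: input weights and biases are drawn at random and frozen.
n_hidden = 30
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                     # hidden-layer output matrix

# Gradient descent on the output weights with a (smoothed) L1/2 penalty.
beta = 0.1 * rng.standard_normal(n_hidden)
lam, eps = 1e-3, 1e-6
mse0 = np.mean((H @ beta - y) ** 2)
for _ in range(2000):
    err = H @ beta - y
    # Smoothed gradient of lam * |beta|^(1/2); the exact derivative
    # sign(beta) / (2 * sqrt(|beta|)) blows up as beta -> 0.
    penalty_grad = lam * beta / (2.0 * (beta ** 2 + eps) ** 0.75)
    grad = H.T @ err / len(y) + penalty_grad
    # Variable step (an assumption in the spirit of the paper's variable
    # learning coefficient): shrink the step when the gradient is large.
    eta = 0.05 / (1.0 + np.max(np.abs(grad)))
    beta -= eta * grad

mse1 = np.mean((H @ beta - y) ** 2)
keep = np.abs(beta) > 1e-3                 # prune near-zero output weights
print(f"MSE {mse0:.4f} -> {mse1:.4f}; hidden nodes kept: {keep.sum()}/{n_hidden}")
```

With a small penalty weight, output weights attached to useful hidden nodes stay well away from zero, while redundant nodes are pushed below the pruning threshold and can be removed from the network.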

A pruning algorithm with L1/2 regularizer for extreme learning machine

Research background: 1. Neural networks are widely applied, but slow convergence and low accuracy have hindered their development. Compared with traditional neural networks, the extreme learning machine overcomes these drawbacks: it learns faster and needs less human intervention, and these advantages have led to its wide adoption. 2. Compared with L1 and L2 regularization, the solution under L1/2 regularization is sparser; compared with L0 regularization, it is easier to solve.
Innovation: The L1/2 regularization method is combined with the extreme learning machine, exploiting the good sparsity of L1/2 regularization to prune the network structure of the extreme learning machine.
Method highlight: The objective function being minimized contains an L1/2 norm, whose derivative becomes large when the weights become small. To prevent the weights from growing too fast, a variable learning rate is proposed.
Main conclusions: Numerical experiments show that, compared with the original extreme learning machine algorithm and the extreme learning machine with L2 regularization, the extreme learning machine with L1/2 regularization not only has fewer hidden nodes but also generalizes better.
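The derivative blow-up behind the variable learning rate can be written out explicitly (a sketch; the specific step-size rule shown is an illustrative choice, not necessarily the paper's exact formula):

```latex
\[
\frac{\partial}{\partial w}\,\lambda\lvert w\rvert^{1/2}
  = \frac{\lambda\,\operatorname{sign}(w)}{2\sqrt{\lvert w\rvert}}
  \;\longrightarrow\; \infty
  \quad \text{as } w \to 0,
\]
so a fixed learning rate $\eta$ would produce arbitrarily large weight
increments near $w = 0$. Scaling the step with the gradient magnitude,
e.g. $\eta_t = \eta_0 / \bigl(1 + \lVert \nabla E(w_t)\rVert\bigr)$,
keeps each increment bounded.
```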

Keywords: extreme learning machine, L1/2 regularization, network pruning


