CLC number: TP312
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2014-01-15
Ye-tian Fan, Wei Wu, Wen-yu Yang, Qin-wei Fan, Jian Wang. A pruning algorithm with L1/2 regularizer for extreme learning machine[J]. Journal of Zhejiang University Science C, 2014, 15(2): 119-125.
@article{fan2014pruning,
title="A pruning algorithm with L1/2 regularizer for extreme learning machine",
author="Fan, Ye-tian and Wu, Wei and Yang, Wen-yu and Fan, Qin-wei and Wang, Jian",
journal="Journal of Zhejiang University Science C",
volume="15",
number="2",
pages="119-125",
year="2014",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C1300197"
}
%0 Journal Article
%T A pruning algorithm with L1/2 regularizer for extreme learning machine
%A Ye-tian Fan
%A Wei Wu
%A Wen-yu Yang
%A Qin-wei Fan
%A Jian Wang
%J Journal of Zhejiang University SCIENCE C
%V 15
%N 2
%P 119-125
%@ 1869-1951
%D 2014
%I Zhejiang University Press & Springer
%R 10.1631/jzus.C1300197
TY - JOUR
T1 - A pruning algorithm with L1/2 regularizer for extreme learning machine
A1 - Ye-tian Fan
A1 - Wei Wu
A1 - Wen-yu Yang
A1 - Qin-wei Fan
A1 - Jian Wang
JO - Journal of Zhejiang University Science C
VL - 15
IS - 2
SP - 119
EP - 125
SN - 1869-1951
Y1 - 2014
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.C1300197
ER -
Abstract: Compared with traditional learning methods such as back propagation (BP), the extreme learning machine (ELM) offers much faster training and requires less human intervention, and has therefore been widely used. In this paper we combine the L1/2 regularization method with the extreme learning machine to prune its hidden nodes. A variable learning coefficient is employed to prevent too large a learning increment. A numerical experiment demonstrates that a network pruned by L1/2 regularization has fewer hidden nodes but performs better than both the original network and a network pruned by L2 regularization.
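The recipe in the abstract — random fixed hidden weights, gradient training of the output weights under an L1/2 penalty with a shrinking learning coefficient, then pruning near-zero nodes — can be sketched as below. This is an illustrative reconstruction, not the authors' exact algorithm: the toy data, network size, smoothing constant `eps` (which avoids the L1/2 gradient blowing up at zero), the learning-rate schedule, and the pruning threshold are all assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X).ravel()

n_hidden = 40
# ELM: input weights and biases are drawn at random and stay fixed
W = rng.normal(size=(1, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)          # hidden-layer output matrix

beta = rng.normal(scale=0.01, size=n_hidden)   # output weights (trained)
lam, eps = 1e-3, 1e-8           # penalty strength, smoothing (assumed values)

for it in range(5000):
    # gradient of the data term ||H beta - T||^2 / n
    grad = 2 * H.T @ (H @ beta - T) / len(T)
    # gradient of the smoothed L1/2 penalty  lam * sum_i |beta_i|^{1/2}
    grad += lam * 0.5 * np.sign(beta) / np.sqrt(np.abs(beta) + eps)
    # variable learning coefficient: step size decays to bound the increment
    eta = 0.01 / (1 + it / 1000)
    beta -= eta * grad

# prune hidden nodes whose output weight was driven toward zero
keep = np.abs(beta) > 1e-3
print(f"kept {keep.sum()} of {n_hidden} hidden nodes")
```

The L1/2 penalty's gradient grows as a weight approaches zero, so redundant nodes are pushed hard toward zero while useful ones survive; the decaying `eta` plays the role of the paper's variable learning coefficient, preventing too large a learning increment near the nonsmooth point.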