CLC number: TN79
On-line Access: 2010-08-02
Received: 2009-08-14
Revision Accepted: 2009-12-04
Crosschecked: 2010-05-04
Kui-kang Cao, Hai-bin Shen, Hua-feng Chen. A parallel and scalable digital architecture for training support vector machines[J]. Journal of Zhejiang University Science C, 2010, 11(8): 620-628.
@article{title="A parallel and scalable digital architecture for training support vector machines",
author="Kui-kang Cao, Hai-bin Shen, Hua-feng Chen",
journal="Journal of Zhejiang University Science C",
volume="11",
number="8",
pages="620-628",
year="2010",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C0910500"
}
%0 Journal Article
%T A parallel and scalable digital architecture for training support vector machines
%A Kui-kang Cao
%A Hai-bin Shen
%A Hua-feng Chen
%J Journal of Zhejiang University Science C
%V 11
%N 8
%P 620-628
%@ 1869-1951
%D 2010
%I Zhejiang University Press & Springer
%DOI 10.1631/jzus.C0910500
TY - JOUR
T1 - A parallel and scalable digital architecture for training support vector machines
A1 - Kui-kang Cao
A1 - Hai-bin Shen
A1 - Hua-feng Chen
JO - Journal of Zhejiang University Science C
VL - 11
IS - 8
SP - 620
EP - 628
SN - 1869-1951
Y1 - 2010
PB - Zhejiang University Press & Springer
DOI - 10.1631/jzus.C0910500
ER -
Abstract: To facilitate the application of support vector machines (SVMs) in embedded systems, we propose and test a parallel and scalable digital architecture based on the sequential minimal optimization (SMO) algorithm for training SVMs. By building on the mature and widely used SMO algorithm, the architecture avoids the numerical instability issues that can arise in traditional numerical algorithms. The error cache updating task, which dominates the computation time of the algorithm, is mapped onto multiple processing units working in parallel. Experimental results show that, with the proposed architecture, SVM training problems can be solved effectively using inexpensive fixed-point arithmetic, and good scalability can be achieved. The architecture overcomes the drawbacks of previously proposed SVM hardware, which lacks the flexibility required for embedded applications, and is therefore better suited to embedded use, where scalability is an important concern.
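The following is a minimal C sketch, not the authors' hardware design, of the error cache update step that the abstract describes as being mapped onto multiple processing units. The function names, the linear kernel, the toy data sizes, and the use of floating-point instead of fixed-point arithmetic are all illustrative assumptions; the sketch only shows how the per-sample updates partition naturally across independent units.

/*
 * Hedged sketch (assumptions noted above, not the paper's RTL): after SMO
 * optimizes the multiplier pair (i, j), every cached error F[k] must be
 * refreshed. The k-loop carries no dependencies between samples, so it can
 * be split across parallel processing units, which is the step the paper
 * accelerates in hardware.
 */
#include <stdio.h>

#define N_SAMPLES  8   /* training-set size (illustrative) */
#define N_FEATURES 4   /* feature dimension (illustrative) */
#define N_UNITS    4   /* number of parallel processing units (illustrative) */

/* Linear kernel K(x_a, x_b) = x_a . x_b (assumption; any kernel fits the same loop). */
static double kernel(const double *a, const double *b)
{
    double dot = 0.0;
    for (int d = 0; d < N_FEATURES; d++)
        dot += a[d] * b[d];
    return dot;
}

/*
 * F[k] += (alpha_i_new - alpha_i_old) * y[i] * K(x_i, x_k)
 *       + (alpha_j_new - alpha_j_old) * y[j] * K(x_j, x_k)
 * Each "processing unit" handles a contiguous slice of k, so the units run
 * independently and the update time scales down with N_UNITS.
 */
static void update_error_cache(double F[N_SAMPLES],
                               double X[N_SAMPLES][N_FEATURES],
                               const int y[N_SAMPLES],
                               int i, int j,
                               double delta_ai, double delta_aj)
{
    for (int unit = 0; unit < N_UNITS; unit++) {      /* each iteration = one PU */
        int lo = unit * N_SAMPLES / N_UNITS;
        int hi = (unit + 1) * N_SAMPLES / N_UNITS;
        for (int k = lo; k < hi; k++) {
            F[k] += delta_ai * y[i] * kernel(X[i], X[k])
                  + delta_aj * y[j] * kernel(X[j], X[k]);
        }
    }
}

int main(void)
{
    /* Toy data, purely to exercise the update path. */
    double X[N_SAMPLES][N_FEATURES] = {
        {1, 0, 0, 0}, {0, 1, 0, 0}, {0, 0, 1, 0}, {0, 0, 0, 1},
        {1, 1, 0, 0}, {0, 1, 1, 0}, {0, 0, 1, 1}, {1, 0, 0, 1}
    };
    int    y[N_SAMPLES] = {1, 1, 1, 1, -1, -1, -1, -1};
    double F[N_SAMPLES] = {0};

    /* Pretend SMO just moved alpha_0 by +0.5 and alpha_4 by -0.5. */
    update_error_cache(F, X, y, 0, 4, 0.5, -0.5);

    for (int k = 0; k < N_SAMPLES; k++)
        printf("F[%d] = %6.3f\n", k, F[k]);
    return 0;
}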
[1]Anguita, D., Boni, A., Ridella, S., 2003. A digital architecture for support vector machines: theory, algorithm, and FPGA implementation. IEEE Trans. Neur. Networks, 14(5):993-1009.
[2]Anguita, D., Pischiutta, S., Ridella, S., Sterpi, D., 2006. Feed-forward support vector machine without multipliers. IEEE Trans. Neur. Networks, 17(5):1328-1331.
[3]Biasi, I., Boni, A., Zorat, A., 2005. A Reconfigurable Parallel Architecture for SVM Classification. Proc. IEEE Int. Joint Conf. on Neural Networks, p.2867-2872.
[4]Burges, C.J.C., 1998. A tutorial on support vector machines for pattern recognition. Data Min. Knowl. Discov., 2(2):121-167.
[5]Catanzaro, B., Sundaram, N., Keutzer, K., 2008. Fast Support Vector Machine Training and Classification on Graphics Processors. Proc. 25th Int. Conf. on Machine Learning, p.104-111.
[6]Chen, S., Gibson, G.J., Cowan, C.F.N., Grant, P.M., 1990. Adaptive equalization of finite nonlinear channels using multilayer perceptrons. EURASIP Signal Process., 20(2):107-119.
[7]Choi, W.Y., Ahn, D., Pan, S.B., Chung, K.I., Chung, Y.W., Chung, S.H., 2006. SVM-based speaker verification system for match-on-card and its hardware implementation. ETRI J., 28(3):320-328.
[8]Frieß, T.T., Cristianini, N., Campbell, C., 1998. The Kernel-Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines. Proc. 15th Int. Conf. on Machine Learning, p.188-196.
[9]Graf, H.P., Cadambi, S., Durdanovic, I., Jakkula, V., Sankaradass, M., Cosatto, E., Chakradhar, S.T., 2008. A Massively Parallel Digital Learning Processor. 22nd Annual Conf. on Neural Information Processing Systems, p.529-536.
[10]Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K., 2001. Improvements to Platt’s SMO algorithm for SVM classifier design. Neur. Comput., 13(3):637-649.
[11]Manikandan, J., Venkataramani, B., Avanthi, V., 2009. FPGA Implementation of Support Vector Machine Based Isolated Digit Recognition System. Proc. 22nd Int. Conf. on VLSI Design, p.347-352.
[12]Platt, J.C., 1999. Fast Training of Support Vector Machines Using Sequential Minimal Optimization. In: Schölkopf, B., Burges, C., Smola, A. (Eds.), Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA, p.185-208.
[13]Schölkopf, B., Burges, C.J.C., Smola, A.J., 1999. Advances in Kernel Methods: Support Vector Learning. MIT Press, Cambridge, MA, p.1-16.
[14]Sebald, D.J., Bucklew, J.A., 2000. Support vector machine techniques for nonlinear equalization. IEEE Trans. Signal Process., 48(11):3217-3226.
[15]Sun, Z., Zhang, L., Tang, E., 2005. An incremental learning method based on SVM for online sketchy shape recognition. LNCS, 3610:655-659.
[16]Vapnik, V.N., 1998. Statistical Learning Theory. Wiley, New York, p.493-520.
[17]Wee, J.W., Lee, C.H., 2004. Concurrent support vector machine processor for disease diagnosis. LNCS, 3316:1129-1134.
Open peer comments: Debate/Discuss/Question/Opinion
<1> zhaizy  2010-08-02 17:08:55
Reviewer: This paper presents an FPGA implementation of the SMO algorithm for Support Vector Machine training. The paper is well written and easy to read. The results are also good. The proposed architecture seems sensible. Overall, I like the paper and think it represents solid work.
--Editor