CLC number: TP391
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
YANG Sheng, GU Jun. Feature selection based on mutual information and redundancy-synergy coefficient[J]. Journal of Zhejiang University Science A, 2004, 5(11): 1382-1391. https://doi.org/10.1631/jzus.2004.1382
Abstract: Mutual information is an important information measure for evaluating feature subsets. In this paper, a hashing mechanism is proposed to calculate the mutual information of a feature subset. The redundancy-synergy coefficient, a novel measure of how redundantly or synergistically a set of features expresses the class feature, is defined in terms of mutual information. The information maximization rule is applied to derive a heuristic feature subset selection method based on mutual information and the redundancy-synergy coefficient. Experimental results show the good performance of the new feature selection method.
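For readers who want to experiment with these ideas, the following is a minimal illustrative sketch in Python, not the authors' actual algorithm. It estimates the mutual information I(S; C) = sum over (s, c) of p(s, c) * log2(p(s, c) / (p(s) * p(c))) for a discrete feature subset S and class C by hashing the subset's value tuples (plain Python dictionary keys stand in for the paper's hashing mechanism), computes a redundancy-synergy-style coefficient, and selects features greedily by information maximization. The coefficient's formula here, (I(S; C) - sum_i I(f_i; C)) / I(S; C), and all function names are assumptions made for illustration; the paper's exact definitions may differ.

from collections import Counter
from math import log2

def mutual_information(samples, subset, labels):
    """Estimate I(S; C) for a discrete feature subset S.

    Value combinations of the subset are projected to tuples and used
    as hash keys, so joint frequencies are counted in a single pass
    instead of enumerating the full joint value space.
    """
    n = len(samples)
    joint = Counter()  # (subset values, class) -> count
    marg = Counter()   # subset values -> count
    cls = Counter(labels)
    for row, c in zip(samples, labels):
        key = tuple(row[i] for i in subset)  # hashable projection
        joint[(key, c)] += 1
        marg[key] += 1
    return sum((nxc / n) * log2((nxc / n) / ((marg[key] / n) * (cls[c] / n)))
               for (key, c), nxc in joint.items())

def redundancy_synergy_coefficient(samples, subset, labels):
    # Assumed, illustrative definition: the relative gap between the
    # joint information I(S; C) and the sum of per-feature informations
    # I(f_i; C). Positive values indicate synergy, negative redundancy.
    joint_mi = mutual_information(samples, subset, labels)
    if joint_mi == 0:
        return 0.0
    indiv = sum(mutual_information(samples, [i], labels) for i in subset)
    return (joint_mi - indiv) / joint_mi

def select_features(samples, labels, k):
    # Greedy forward selection by information maximization: at each step,
    # add the candidate feature that maximizes I(selected + {f}; C).
    selected = []
    for _ in range(k):
        remaining = [i for i in range(len(samples[0])) if i not in selected]
        if not remaining:
            break
        selected.append(max(remaining, key=lambda f:
                            mutual_information(samples, selected + [f], labels)))
    return selected

if __name__ == "__main__":
    # Toy data: the class is the XOR of features 0 and 1 (a synergistic
    # pair, each individually uninformative); feature 2 is pure noise.
    X = [[0, 0, 1], [0, 1, 0], [1, 0, 1], [1, 1, 0],
         [0, 0, 0], [0, 1, 1], [1, 0, 0], [1, 1, 1]]
    y = [0, 1, 1, 0, 0, 1, 1, 0]
    print(select_features(X, y, 2))                      # -> [0, 1]
    print(redundancy_synergy_coefficient(X, [0, 1], y))  # -> 1.0 (synergy)

On the XOR example, each feature alone carries zero information about the class, yet the pair determines it completely; the joint-information criterion and the positive coefficient both detect this, which a purely per-feature ranking would miss.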