CLC number: TP751
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2020-03-04
Cheng-ming Ye, Xin Liu, Hong Xu, Shi-cong Ren, Yao Li, Jonathan Li. Classification of hyperspectral images based on a convolutional neural network and spectral sensitivity[J]. Journal of Zhejiang University Science A, 2020, 21(3): 240-248.
@article{title="Classification of hyperspectral images based on a convolutional neural network and spectral sensitivity",
author="Cheng-ming Ye, Xin Liu, Hong Xu, Shi-cong Ren, Yao Li, Jonathan Li",
journal="Journal of Zhejiang University Science A",
volume="21",
number="3",
pages="240-248",
year="2020",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.A1900085"
}
%0 Journal Article
%T Classification of hyperspectral images based on a convolutional neural network and spectral sensitivity
%A Cheng-ming Ye
%A Xin Liu
%A Hong Xu
%A Shi-cong Ren
%A Yao Li
%A Jonathan Li
%J Journal of Zhejiang University SCIENCE A
%V 21
%N 3
%P 240-248
%@ 1673-565X
%D 2020
%I Zhejiang University Press & Springer
%R 10.1631/jzus.A1900085
TY - JOUR
T1 - Classification of hyperspectral images based on a convolutional neural network and spectral sensitivity
A1 - Cheng-ming Ye
A1 - Xin Liu
A1 - Hong Xu
A1 - Shi-cong Ren
A1 - Yao Li
A1 - Jonathan Li
JO - Journal of Zhejiang University Science A
VL - 21
IS - 3
SP - 240
EP - 248
SN - 1673-565X
Y1 - 2020
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.A1900085
ER -
Abstract: In recent years, deep learning methods have increasingly been applied to hyperspectral imaging. Because of the nature of hyperspectral imaging, a large amount of information is contained in the spectral dimension of hyperspectral images, and different land-surface objects are sensitive to different wavelength ranges. To achieve higher classification accuracy, we propose a structure that combines spectral sensitivity with a convolutional neural network by adding spectral weights, derived from predicted outcomes, before the final classification layer. First, the samples are divided into visible-light and infrared bands, and a portion of the samples is fed into the networks during training. Then, two key parameters, the unrecognized rate (δ) and the wrongly recognized rate (γ), are calculated from the predicted outcome over the whole scene. Next, the spectral weight is derived from these two parameters. Finally, the spectral weight is added and the improved structure is constructed. The improved structure not only combines features in the spatial and spectral dimensions, but also gives spectral sensitivity a primary role. Compared with inputs covering the whole spectrum, the improved structure attains nearly 2% higher prediction accuracy, and on public data sets it achieves approximately 1% higher accuracy on average.
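The abstract does not give the exact formula linking δ, γ, and the spectral weight, nor the network architecture, so the following Python (PyTorch) sketch is only illustrative and is not the authors' code. It assumes a simple weighting rule w = 1 - (δ + γ)/2 and two small 1D-convolutional branches over the visible-light and infrared bands, with the weights applied to the branch features just before the final classification layer; all layer sizes and the example δ/γ values are hypothetical.

# Hypothetical sketch (not the authors' code): spectral weights derived from the
# unrecognized rate (delta) and wrongly recognized rate (gamma) are applied to
# visible-light and infrared feature branches before the final classifier.
import torch
import torch.nn as nn

def spectral_weight(delta: float, gamma: float) -> float:
    # Assumed rule: band groups that are less often unrecognized (delta) or
    # wrongly recognized (gamma) receive a larger weight.
    return 1.0 - 0.5 * (delta + gamma)

class SpectralWeightedCNN(nn.Module):
    def __init__(self, n_bands_vis, n_bands_ir, n_classes,
                 delta_vis, gamma_vis, delta_ir, gamma_ir):
        super().__init__()
        # Independent 1D convolutional branches over the spectral dimension.
        self.vis_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.ir_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        # Spectral weights computed from the predicted outcome of a first pass.
        self.w_vis = spectral_weight(delta_vis, gamma_vis)
        self.w_ir = spectral_weight(delta_ir, gamma_ir)
        self.n_bands_vis = n_bands_vis
        # Final classification layer on the weighted, concatenated features.
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, spectra):
        # spectra: (batch, 1, n_bands_vis + n_bands_ir)
        vis = spectra[:, :, :self.n_bands_vis]
        ir = spectra[:, :, self.n_bands_vis:]
        feats = torch.cat([self.w_vis * self.vis_branch(vis),
                           self.w_ir * self.ir_branch(ir)], dim=1)
        return self.classifier(feats)

# Usage: 60 visible + 140 infrared bands, 9 land-cover classes, with
# illustrative delta/gamma values measured on a first prediction pass.
model = SpectralWeightedCNN(60, 140, 9, 0.05, 0.10, 0.12, 0.20)
logits = model(torch.randn(4, 1, 200))
print(logits.shape)  # torch.Size([4, 9])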
The paper reports the classification of a hyperspectral image (HSI) data set using a convolutional neural network (CNN). The innovation of this work is the use of two different weightings for the visible and NIR bands to enhance class predictions, which is similar to a band-selection approach. The idea seems plausible.