CLC number: TN911.73; TP391.41

On-line Access: 2023-07-03

Received: 2022-10-03

Revision Accepted: 2023-02-02

Crosschecked: 2023-07-03


ORCID:

Weishi ZHANG

https://orcid.org/0000-0003-0519-8397

Fei WANG

https://orcid.org/0000-0002-3973-6037

Jingchun ZHOU

https://orcid.org/0000-0002-4111-6240


Frontiers of Information Technology & Electronic Engineering  2023 Vol.24 No.6 P.828-843

https://doi.org/10.1631/FITEE.2200429


Underwater object detection by fusing features from different representations of sonar data


Author(s):  Fei WANG, Wanyu LI, Miao LIU, Jingchun ZHOU, Weishi ZHANG

Affiliation(s):  College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China; College of Transportation Engineering, Dalian Maritime University, Dalian 116026, China

Corresponding email(s):   feiwang@dlmu.edu.cn, zhoujingchun@dlmu.edu.cn, teesiv@dlmu.edu.cn

Key Words:  Underwater object detection, Sonar data representation, Feature fusion


Fei WANG, Wanyu LI, Miao LIU, Jingchun ZHOU, Weishi ZHANG. Underwater object detection by fusing features from different representations of sonar data[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(6): 828-843.

@article{title="Underwater object detection by fusing features from different representations of sonar data",
author="Fei WANG, Wanyu LI, Miao LIU, Jingchun ZHOU, Weishi ZHANG",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="6",
pages="828-843",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200429"
}

%0 Journal Article
%T Underwater object detection by fusing features from different representations of sonar data
%A Fei WANG
%A Wanyu LI
%A Miao LIU
%A Jingchun ZHOU
%A Weishi ZHANG
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 6
%P 828-843
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200429

TY - JOUR
T1 - Underwater object detection by fusing features from different representations of sonar data
A1 - Fei WANG
A1 - Wanyu LI
A1 - Miao LIU
A1 - Jingchun ZHOU
A1 - Weishi ZHANG
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 6
SP - 828
EP - 843
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200429
ER -


Abstract: 
Modern underwater object detection methods recognize objects from sonar data based on their geometric shapes. However, the distortion of objects during data acquisition and representation is seldom considered. In this paper, we present a detailed summary of representations for sonar data and a concrete analysis of the geometric characteristics of different data representations. Based on this, a feature fusion framework is proposed to fully use the intensity features extracted from the polar image representation and the geometric features learned from the point cloud representation of sonar data. Three feature fusion strategies are presented to investigate the impact of feature fusion on different components of the detection pipeline. In addition, the fusion strategies can be easily integrated into other detectors, such as the You Only Look Once (YOLO) series. The effectiveness of our proposed framework and feature fusion strategies is demonstrated on a public sonar dataset captured in real-world underwater environments. Experimental results show that our method benefits both the region proposal and the object classification modules in the detectors.
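
To make the two-branch idea concrete, the sketch below is a minimal illustration, not the authors' released implementation: the function and class names (polar_image_to_point_cloud, FusedSonarFeatures), the fixed echo threshold, and the sonar parameters r_max and fov_deg are assumptions for the example. It converts a range-bearing polar sonar image into an (x, y, intensity) point cloud, extracts intensity features from the polar image with a small CNN and geometric features from the points with a PointNet-style shared MLP, and fuses the two global descriptors by concatenation, the simplest feature-level fusion of the kind the abstract describes.

    # Minimal two-branch fusion sketch (illustrative only): intensity features
    # from the polar image, geometric features from the point cloud.
    import numpy as np
    import torch
    import torch.nn as nn

    def polar_image_to_point_cloud(polar_img, r_max=30.0, fov_deg=130.0, thresh=0.5):
        """Convert a range-bearing sonar image (rows = range bins, columns = beams)
        into an (N, 3) array of (x, y, intensity) for echoes above thresh."""
        n_r, n_b = polar_img.shape
        r = np.linspace(0.0, r_max, n_r)                                 # range per row
        theta = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, n_b))  # bearing per column
        rr, tt = np.meshgrid(r, theta, indexing="ij")
        mask = polar_img > thresh                                        # keep strong echoes only
        x, y = rr[mask] * np.cos(tt[mask]), rr[mask] * np.sin(tt[mask])
        return np.stack([x, y, polar_img[mask]], axis=1).astype(np.float32)

    class FusedSonarFeatures(nn.Module):
        """CNN branch for intensity features; PointNet-style branch (shared MLP
        + max-pool) for geometric features; fusion by concatenation."""
        def __init__(self, feat_dim=128):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, feat_dim))
            self.point_mlp = nn.Sequential(                              # applied per point
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, feat_dim), nn.ReLU())

        def forward(self, polar_img, points):
            f_int = self.cnn(polar_img)                                  # (B, feat_dim)
            f_geo = self.point_mlp(points).max(dim=1).values             # (B, feat_dim)
            return torch.cat([f_int, f_geo], dim=1)                      # (B, 2 * feat_dim)

    # Toy usage: one fake 512 x 256 polar frame and its thresholded point cloud.
    img = np.random.rand(512, 256).astype(np.float32)
    pts = polar_image_to_point_cloud(img)[:1024]                         # fixed-size toy batch
    fused = FusedSonarFeatures()(torch.from_numpy(img)[None, None],
                                 torch.from_numpy(pts)[None])
    print(fused.shape)                                                   # torch.Size([1, 256])

In the full framework, such a fused descriptor would feed the detector's region proposal and classification heads, which is where the reported gains appear; concatenation stands in here for whichever of the three fusion strategies is used.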

Underwater object detection by fusing features from different representations of sonar data

Fei WANG^1, Wanyu LI^1, Miao LIU^2, Jingchun ZHOU^1, Weishi ZHANG^1
^1 College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China
^2 College of Transportation Engineering, Dalian Maritime University, Dalian 116026, China

Abstract: Existing underwater object detection methods mostly recognize objects in sonar data by their geometric shapes, and largely ignore the shape distortion introduced during data acquisition and data representation. To address this, this paper presents a comparative analysis of the different representations of sonar data and, on that basis, proposes a feature fusion framework that makes full use of the intensity features extracted from the polar-image representation and the geometric features learned from the point cloud representation. Three feature fusion strategies are designed within the framework to analyze the impact of feature fusion on different modules of the detector. These fusion strategies can also be integrated directly into other detectors, such as the YOLO series. The effectiveness of the proposed framework and fusion strategies is verified through a series of comparative experiments on a public sonar dataset captured in real underwater scenes. Experimental results show that the proposed feature fusion method improves the results of both the region proposal module and the classification module of the detector.

Keywords: Underwater object detection; Sonar data representation; Feature fusion



