
On-line Access: 2022-02-15

Received: 2021-07-29

Revision Accepted: 2022-01-25




A novel robotic visual perception framework for underwater operation

Author(s):  Yue LU, Xingyu CHEN, Zhengxing WU, Junzhi YU, Li WEN

Affiliation(s):  State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China

Corresponding email(s):   junzhi.yu@ia.ac.cn

Key Words:  Underwater operation, Robotic perception, Visual restoration, Video object detection

Yue LU, Xingyu CHEN, Zhengxing WU, Junzhi YU, Li WEN. A novel robotic visual perception framework for underwater operation[J]. Frontiers of Information Technology & Electronic Engineering. https://doi.org/10.1631/FITEE.2100366

@article{FITEE.2100366,
title="A novel robotic visual perception framework for underwater operation",
author="Yue LU, Xingyu CHEN, Zhengxing WU, Junzhi YU, Li WEN",
journal="Frontiers of Information Technology & Electronic Engineering",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2100366"
}

DOI: 10.1631/FITEE.2100366

ISSN: 2095-9184


Underwater robotic operation usually requires visual perception (e.g., object detection and tracking), but underwater scenes suffer from poor visual quality and constitute a distinct data domain, both of which can degrade perception accuracy. In addition, detection continuity and stability matter for robotic perception, yet the commonly used static accuracy metric, average precision (AP), cannot reflect a detector's performance over time. In response to these two problems, we present a novel robotic visual perception framework. First, we systematically investigate how a quality-diverse data domain and visual restoration affect detection performance. We find that although domain quality has a negligible effect on within-domain detection accuracy, visual restoration benefits detection in real sea scenarios by reducing the domain shift. Moreover, we propose non-reference assessments of detection continuity and stability based on object tracklets. Further, an online tracklet refinement (OTR) method is developed to improve the temporal performance of detectors. Finally, combined with visual restoration, an accurate and stable underwater robotic visual perception framework is established. Small-overlap suppression (SOS) is proposed to extend video object detection (VID) methods to the single-object tracking task, providing the flexibility to switch between detection and tracking. Extensive experiments on the ImageNet VID dataset and real-world robotic tasks verify the correctness of our analysis and the superiority of the proposed approaches. The code is available at https://github.com/yrqs/VisPerception.
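The abstract does not specify how SOS is implemented beyond discarding detector outputs that overlap only slightly with the tracked target; the sketch below is one plausible reading of that idea, not the paper's actual method (see the linked repository for the authors' code). All function names and the IoU threshold here are illustrative assumptions.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def small_overlap_suppression(detections, scores, target_box, iou_thresh=0.3):
    """Turn a per-frame detector into a single-object tracker:
    suppress detections whose overlap with the current target is small,
    then return the highest-scoring surviving box as the new target."""
    overlaps = iou(target_box, detections)
    keep = overlaps >= iou_thresh
    if not keep.any():
        return target_box  # target lost this frame; hold the previous box
    kept_scores = np.where(keep, scores, -np.inf)
    return detections[np.argmax(kept_scores)]
```

Switching back to plain detection then simply means skipping the suppression step and reporting all detector outputs, which is the flexibility the framework advertises.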


Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2022 Journal of Zhejiang University-SCIENCE