Frontiers of Information Technology & Electronic Engineering
ISSN 2095-9184 (print), ISSN 2095-9230 (online)
2020 Vol.21 No.12 P.1770-1782
SPSSNet: a real-time network for image semantic segmentation
Abstract: Although deep neural networks (DNNs) have achieved great success in semantic segmentation tasks, deploying them in real-time applications remains challenging. Large numbers of feature channels, parameters, and floating-point operations make such networks sluggish and computationally heavy, which is undesirable for real-time tasks such as robotics and autonomous driving. Most existing approaches achieve real-time inference speed by sacrificing spatial resolution, which results in poor accuracy. In this paper, we propose a light-weight stage-pooling semantic segmentation network (SPSSN) that efficiently reuses the paramount features of early layers at multiple later stages and at different spatial resolutions. SPSSN takes full-resolution 2048×1024 input, uses only 1.42×10^6 parameters, yields 69.4% mean intersection over union (mIoU) accuracy without pre-training, and reaches an inference speed of 59 frames/s on the Cityscapes dataset. Owing to its light-weight architecture, SPSSN can run on mobile devices in real time. To demonstrate the effectiveness of the proposed network, we compare its results with those of state-of-the-art networks.
Key words: Real-time semantic segmentation, Stage-pooling, Feature reuse
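The stage-pooling idea described in the abstract can be illustrated with a short sketch. The following PyTorch toy model is not the authors' SPSSN implementation; the layer widths, stage count, and module names are illustrative assumptions. It shows the core mechanism: features from an early stage are average-pooled to the spatial resolution of each later stage and concatenated there, so early-layer features are reused at multiple resolutions.

```python
# Minimal sketch of stage-pooling feature reuse (assumed PyTorch toy model,
# NOT the authors' SPSSN; all sizes and names are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StagePoolingToy(nn.Module):
    def __init__(self, num_classes=19):  # 19 classes, as in Cityscapes
        super().__init__()
        # Three downsampling stages: 1/2, 1/4, and 1/8 of input resolution.
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1),
            nn.BatchNorm2d(16), nn.ReLU(inplace=True))
        self.stage2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1),
            nn.BatchNorm2d(32), nn.ReLU(inplace=True))
        # Later stages take extra channels: early features pooled to their scale.
        self.stage3 = nn.Sequential(
            nn.Conv2d(32 + 16, 64, 3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True))
        self.classifier = nn.Conv2d(64 + 16, num_classes, 1)

    def forward(self, x):
        f1 = self.stage1(x)  # early features to be reused downstream
        f2 = self.stage2(f1)
        # Stage-pool f1 to 1/4 resolution and reuse it at stage 3.
        f3 = self.stage3(torch.cat([f2, F.avg_pool2d(f1, 2)], dim=1))
        # Reuse f1 once more at 1/8 resolution before classification.
        logits = self.classifier(torch.cat([f3, F.avg_pool2d(f1, 4)], dim=1))
        # Upsample class scores back to the full input resolution.
        return F.interpolate(logits, size=x.shape[2:],
                             mode="bilinear", align_corners=False)

net = StagePoolingToy()
out = net(torch.randn(1, 3, 1024, 2048))  # full-resolution Cityscapes input
print(out.shape)  # torch.Size([1, 19, 1024, 2048])
```

Note that average pooling adds no parameters, which is consistent with the light-weight goal the abstract emphasizes: feature reuse comes from cheap resampling and concatenation rather than from extra convolutional branches.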
1School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2AInnovation Technology, Beijing 100080, China
DOI: 10.1631/FITEE.1900697
CLC number: TP39
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2020-07-22