CLC number: R443+.8; TP391.4
On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2019-10-08
Yan-yi Zhang, Di Xie. Detection and segmentation of multi-class artifacts in endoscopy[J]. Journal of Zhejiang University Science B, 2019, 20(12): 1014-1020.
@article{zhang2019detection,
title="Detection and segmentation of multi-class artifacts in endoscopy",
author="Yan-yi Zhang and Di Xie",
journal="Journal of Zhejiang University Science B",
volume="20",
number="12",
pages="1014-1020",
year="2019",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.B1900340"
}
%0 Journal Article
%T Detection and segmentation of multi-class artifacts in endoscopy
%A Yan-yi Zhang
%A Di Xie
%J Journal of Zhejiang University SCIENCE B
%V 20
%N 12
%P 1014-1020
%@ 1673-1581
%D 2019
%I Zhejiang University Press & Springer
%R 10.1631/jzus.B1900340
TY - JOUR
T1 - Detection and segmentation of multi-class artifacts in endoscopy
A1 - Yan-yi Zhang
A1 - Di Xie
JO - Journal of Zhejiang University Science B
VL - 20
IS - 12
SP - 1014
EP - 1020
SN - 1673-1581
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/jzus.B1900340
ER -
Abstract: Endoscopy may be used for the early screening of various cancers, such as nasopharyngeal cancer, esophageal adenocarcinoma, gastric cancer, colorectal cancer, and bladder cancer, and for performing minimally invasive surgical procedures, such as laparoscopic surgery. During these procedures, an endoscope is used: a long, thin, rigid or flexible tube with a light source and a camera at the tip, which allows the inside of the affected organ to be viewed on a screen and helps doctors make a diagnosis.