
CLC number: TP391.41
On-line Access: 2026-01-08
Received: 2025-05-11
Revision Accepted: 2025-09-03
Crosschecked: 2026-01-08
Haoxiang ZHU, Houjin CHEN, Yanfeng LI, Jia SUN, Ziwei CHEN, Jiaxin LI. Parallel prototype filter and feature refinement for few-shot medical image segmentation[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(11): 2143-2158.
@article{zhu2025parallel,
title="Parallel prototype filter and feature refinement for few-shot medical image segmentation",
author="Haoxiang ZHU and Houjin CHEN and Yanfeng LI and Jia SUN and Ziwei CHEN and Jiaxin LI",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="11",
pages="2143-2158",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2500304"
}
Abstract: Medical image segmentation is critical for clinical diagnosis, but the scarcity of annotated data limits robust model training, making few-shot learning indispensable. Existing methods often suffer from two issues: performance degradation caused by significant inter-class variation in pathological structures, and overreliance on attention mechanisms whose quadratic complexity (O(n²)) hinders efficient modeling of long-range dependencies. In contrast, the state space model (SSM) offers linear complexity (O(n)) and superior efficiency, making it a promising alternative. To address these challenges, we propose PPFFR (parallel prototype filter and feature refinement) for few-shot medical image segmentation. The proposed framework comprises three key modules. First, we propose the prototype refinement (PR) module to construct refined class subgraphs from encoder-extracted features of both support and query images, generating support prototypes with minimized inter-class variation. We then propose the parallel prototype filter (PPF) module to suppress background interference and enhance the correlation between support and query prototypes. Finally, we implement the feature refinement (FR) module, which leverages the SSM's robust long-range dependency modeling to further improve segmentation accuracy and accelerate model convergence, integrated with multi-head attention (MHA) to preserve spatial details. Experimental results on the Abd-MRI dataset demonstrate that FR with MHA outperforms FR alone in segmenting the left kidney, right kidney, liver, and spleen, as well as in mean accuracy, confirming MHA's role in improving precision. In extensive experiments on three public datasets under the 1-way 1-shot setting, PPFFR achieves Dice scores of 87.62%, 86.74%, and 79.71%, respectively, consistently surpassing state-of-the-art few-shot medical image segmentation methods. As a critical component, the SSM allows PPFFR to balance performance with efficiency. Ablation studies validate the effectiveness of the PR, PPF, and FR modules. The results indicate that explicit inter-class variation reduction and SSM-based feature refinement can enhance accuracy without heavy computational overhead. In conclusion, PPFFR effectively enhances inter-class consistency and computational efficiency for few-shot medical image segmentation. This work provides insights for few-shot learning in medical imaging and inspires lightweight architecture designs for clinical deployment.
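To make the two ingredients named in the abstract concrete, the PyTorch sketch below illustrates (i) the standard prototype-matching paradigm for few-shot segmentation (PANet-style masked average pooling of support features followed by cosine-similarity matching on the query) and (ii) a linear-time state-space recurrence whose per-token cost is constant, in contrast with the quadratic cost of self-attention. All function names, shapes, and parameter values here are illustrative assumptions for a minimal sketch, not the authors' PPFFR implementation.

```python
import torch
import torch.nn.functional as F


def masked_average_pooling(feat, mask):
    """Collapse support features into a single class prototype.

    feat: (B, C, H, W) encoder features of the support image
    mask: (B, 1, h, w) binary foreground mask of the support image
    returns: (B, C) prototype vector
    """
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="bilinear",
                         align_corners=False)
    # Average feature vectors over foreground pixels only.
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-5)


def cosine_similarity_logits(query_feat, prototype, scale=20.0):
    """Score every query pixel against the prototype (scaled cosine similarity).

    query_feat: (B, C, H, W), prototype: (B, C) -> logits: (B, H, W)
    """
    proto = prototype[:, :, None, None]
    return scale * F.cosine_similarity(query_feat, proto, dim=1)


def ssm_scan(x, A, B, C):
    """Linear-time state-space recurrence h_t = A h_{t-1} + B x_t, y_t = C h_t.

    x: (L, D) token sequence; A: (N, N), B: (N, D), C: (D, N).
    The per-token cost is constant, so the scan is O(L), unlike the
    O(L^2) pairwise interactions of self-attention.
    """
    L, D = x.shape
    N = A.shape[0]
    h = x.new_zeros(N)
    ys = []
    for t in range(L):
        h = A @ h + B @ x[t]
        ys.append(C @ h)
    return torch.stack(ys)


if __name__ == "__main__":
    # Toy shapes: one support/query pair with 256-channel encoder features.
    supp_feat = torch.randn(1, 256, 32, 32)
    supp_mask = (torch.rand(1, 1, 128, 128) > 0.5).float()
    query_feat = torch.randn(1, 256, 32, 32)

    proto = masked_average_pooling(supp_feat, supp_mask)
    fg_logits = cosine_similarity_logits(query_feat, proto)
    print(fg_logits.shape)  # torch.Size([1, 32, 32])

    # Flatten query features into a token sequence and run the SSM scan.
    tokens = query_feat.flatten(2).squeeze(0).t()   # (1024, 256)
    A = 0.9 * torch.eye(16)
    B = torch.randn(16, 256) * 0.01
    C = torch.randn(256, 16) * 0.01
    refined = ssm_scan(tokens, A, B, C)
    print(refined.shape)  # torch.Size([1024, 256])
```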