CLC number: TP391.41

On-line Access: 2026-01-08

Received: 2025-05-11

Revision Accepted: 2025-09-03

Crosschecked: 2026-01-08

 ORCID:

Haoxiang ZHU

https://orcid.org/0009-0003-6262-4660

Houjin CHEN

https://orcid.org/0000-0002-9247-8495

Frontiers of Information Technology & Electronic Engineering  2025 Vol.26 No.11 P.2143-2158

http://doi.org/10.1631/FITEE.2500304


Parallel prototype filter and feature refinement for few-shot medical image segmentation


Author(s):  Haoxiang ZHU, Houjin CHEN, Yanfeng LI, Jia SUN, Ziwei CHEN, Jiaxin LI

Affiliation(s):  School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; Henan Investment Group Co., Ltd., Zhengzhou 450008, China

Corresponding email(s):   25110066@bjtu.edu.cn

Key Words:  Few-shot learning, Medical image segmentation, Prototype filter, State space model


Haoxiang ZHU, Houjin CHEN, Yanfeng LI, Jia SUN, Ziwei CHEN, Jiaxin LI. Parallel prototype filter and feature refinement for few-shot medical image segmentation[J]. Frontiers of Information Technology & Electronic Engineering, 2025, 26(11): 2143-2158.

@article{FITEE.2500304,
title="Parallel prototype filter and feature refinement for few-shot medical image segmentation",
author="Haoxiang ZHU and Houjin CHEN and Yanfeng LI and Jia SUN and Ziwei CHEN and Jiaxin LI",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="26",
number="11",
pages="2143-2158",
year="2025",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2500304"
}

%0 Journal Article
%T Parallel prototype filter and feature refinement for few-shot medical image segmentation
%A Haoxiang ZHU
%A Houjin CHEN
%A Yanfeng LI
%A Jia SUN
%A Ziwei CHEN
%A Jiaxin LI
%J Frontiers of Information Technology & Electronic Engineering
%V 26
%N 11
%P 2143-2158
%@ 2095-9184
%D 2025
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2500304

TY - JOUR
T1 - Parallel prototype filter and feature refinement for few-shot medical image segmentation
A1 - Haoxiang ZHU
A1 - Houjin CHEN
A1 - Yanfeng LI
A1 - Jia SUN
A1 - Ziwei CHEN
A1 - Jiaxin LI
JO - Frontiers of Information Technology & Electronic Engineering
VL - 26
IS - 11
SP - 2143
EP - 2158
SN - 2095-9184
Y1 - 2025
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2500304
ER -


Abstract: 
Medical image segmentation is critical for clinical diagnosis, but the scarcity of annotated data limits robust model training, making few-shot learning indispensable. Existing methods often suffer from two issues: performance degradation caused by significant inter-class variations in pathological structures, and overreliance on attention mechanisms whose high computational complexity (O(n²)) hinders efficient modeling of long-range dependencies. In contrast, the state space model (SSM) offers linear complexity (O(n)) and superior efficiency, making it a key solution. To address these challenges, we propose PPFFR (parallel prototype filter and feature refinement) for few-shot medical image segmentation. The proposed framework comprises three key modules. First, the prototype refinement (PR) module constructs refined class subgraphs from encoder-extracted features of both support and query images, generating support prototypes with minimized inter-class variation. Second, the parallel prototype filter (PPF) module suppresses background interference and strengthens the correlation between support and query prototypes. Finally, the feature refinement (FR) module exploits the SSM's robust long-range dependency modeling, integrated with multi-head attention (MHA) to preserve spatial details, to further improve segmentation accuracy and accelerate model convergence. Experimental results on the Abd-MRI dataset demonstrate that FR with MHA outperforms FR alone in segmenting the left kidney, right kidney, liver, and spleen, as well as in mean accuracy, confirming MHA's role in improving precision. In extensive experiments conducted on three public datasets under the 1-way 1-shot setting, PPFFR achieves Dice scores of 87.62%, 86.74%, and 79.71%, respectively, consistently surpassing state-of-the-art few-shot medical image segmentation methods. As the critical component, the SSM ensures that PPFFR balances performance with efficiency. Ablation studies validate the effectiveness of the PR, PPF, and FR modules. The results indicate that explicit reduction of inter-class variation and SSM-based feature refinement can enhance accuracy without heavy computational overhead. In conclusion, PPFFR effectively enhances inter-class consistency and computational efficiency for few-shot medical image segmentation. This work provides insights for few-shot learning in medical imaging and inspires lightweight architecture designs for clinical deployment.
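
The flow sketched in the abstract (prototype extraction from support features, refinement of query features with a linear-time sequence mixer plus multi-head attention, and prototype-query matching) can be pictured with a short PyTorch sketch. This is an illustrative approximation only: the module names (ToyDiagonalSSM, FeatureRefinement, segment_query), the toy diagonal state-space recurrence, and all hyperparameters are our own assumptions, not the authors' PPFFR implementation, and the PR and PPF modules are not reproduced here.

# Minimal sketch (not the authors' code): support prototype via masked average
# pooling, query refinement with a toy linear-time SSM branch plus multi-head
# attention, and cosine-similarity matching to produce a foreground score map.
import torch
import torch.nn as nn
import torch.nn.functional as F


def masked_average_pooling(feat, mask):
    """Support prototype = mask-weighted mean of support features.
    feat: (B, C, H, W); mask: (B, 1, H, W) with values in {0, 1}."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    return (feat * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)  # (B, C)


class ToyDiagonalSSM(nn.Module):
    """Per-channel recurrence x_t = a*x_{t-1} + b*u_t, y_t = c*x_t.
    A placeholder for the selective SSM; runs in O(L) time."""
    def __init__(self, dim):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim) - 1.0)  # keeps decay in (0, 1)
        self.b = nn.Parameter(torch.ones(dim))
        self.c = nn.Parameter(torch.ones(dim))

    def forward(self, u):                       # u: (B, L, C)
        a = torch.sigmoid(self.log_a)
        x = torch.zeros_like(u[:, 0])
        ys = []
        for t in range(u.shape[1]):             # linear scan over the sequence
            x = a * x + self.b * u[:, t]
            ys.append(self.c * x)
        return torch.stack(ys, dim=1)


class FeatureRefinement(nn.Module):
    """SSM branch for long-range context + MHA branch for spatial detail."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.ssm = ToyDiagonalSSM(dim)
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                  # tokens: (B, L, C)
        ssm_out = self.ssm(tokens)
        mha_out, _ = self.mha(tokens, tokens, tokens)
        return self.norm(tokens + ssm_out + mha_out)


def segment_query(sup_feat, sup_mask, qry_feat, refiner, tau=20.0):
    """Match refined query tokens to the support prototype by cosine similarity."""
    proto = masked_average_pooling(sup_feat, sup_mask)           # (B, C)
    B, C, H, W = qry_feat.shape
    tokens = refiner(qry_feat.flatten(2).transpose(1, 2))        # (B, HW, C)
    sim = F.cosine_similarity(tokens, proto.unsqueeze(1), dim=-1) * tau
    return sim.reshape(B, 1, H, W)                               # foreground scores


if __name__ == "__main__":
    B, C, H, W = 1, 32, 16, 16
    refiner = FeatureRefinement(C)
    scores = segment_query(torch.randn(B, C, H, W),
                           (torch.rand(B, 1, H, W) > 0.5).float(),
                           torch.randn(B, C, H, W), refiner)
    print(scores.shape)  # torch.Size([1, 1, 16, 16])

In this sketch the SSM branch scans the query tokens in O(L) time while the attention branch costs O(L²); combining the two mirrors the abstract's point that the SSM supplies efficient long-range context and MHA preserves spatial detail.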

Parallel prototype filter and feature refinement for few-shot medical image segmentation

Haoxiang ZHU 1, Houjin CHEN 1, Yanfeng LI 1, Jia SUN 1, Ziwei CHEN 2, Jiaxin LI 2
1 School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China
2 Henan Investment Group Co., Ltd., Zhengzhou 450008, China
Abstract: Medical image segmentation is critical in clinical diagnosis, but the scarcity of annotated data limits robust model training, making few-shot learning indispensable. Existing methods typically face two problems: first, significant inter-class variations in pathological structures degrade performance; second, overreliance on attention mechanisms with high computational complexity (O(n²)) hinders efficient modeling of long-range dependencies. In contrast, the state space model, with its linear complexity (O(n)) and higher efficiency, offers a key solution. To address these challenges, we propose a parallel prototype filter and feature refinement framework (PPFFR) for few-shot medical image segmentation. The framework consists of three key modules. First, a prototype refinement module constructs refined class subgraphs from the encoded features of support and query images to generate support prototypes with minimized inter-class variation. Second, a parallel prototype filter module is designed to suppress background interference and strengthen the correlation between support and query prototypes. Finally, a feature refinement module combines the state space model's strong long-range dependency modeling with the spatial-detail preservation of multi-head attention to further improve segmentation accuracy and accelerate model convergence. Experimental results on the abdominal MRI dataset show that the feature refinement module combined with multi-head attention outperforms the feature refinement module alone in segmenting the left kidney, right kidney, liver, and spleen, and also improves mean accuracy, confirming the contribution of multi-head attention to precision. In extensive experiments on three public datasets under the 1-way 1-shot setting, PPFFR achieves Dice scores of 87.62%, 86.74%, and 79.71%, respectively, outperforming state-of-the-art few-shot medical image segmentation methods. As the core component, the state space model ensures that PPFFR balances performance and efficiency. Ablation studies validate the effectiveness of the prototype refinement, parallel prototype filter, and feature refinement modules. The results show that substantially reducing inter-class variation and applying feature refinement based on the state space model can improve accuracy without excessive computational overhead. In summary, PPFFR effectively improves inter-class consistency and computational efficiency in few-shot medical image segmentation. This study provides new insights for few-shot learning in medical imaging and informs lightweight architecture design for clinical deployment.

Key words: Few-shot learning; Medical image segmentation; Prototype filter; State space model


