
Frontiers of Information Technology & Electronic Engineering

ISSN 2095-9184 (print), ISSN 2095-9230 (online)

Forget less, count better: a domain-incremental self-distillation learning benchmark for lifelong crowd counting

Abstract: Crowd counting has important applications in public safety and pandemic control. A robust and practical crowd counting system must be able to learn continuously from newly arriving domain data in real-world scenarios, rather than fit a single domain only. Off-the-shelf methods have several drawbacks when handling multiple domains: (1) because of discrepancies in the intrinsic data distributions of different domains, a model's performance on old domains becomes limited (or even drops dramatically) after it is trained on images from new domains, a phenomenon known as catastrophic forgetting; (2) a model trained well on a specific domain performs imperfectly on other, unseen domains because of domain shift; (3) handling multiple domains leads to linearly increasing storage overhead, whether all the data are mixed for training or dozens of separate models are trained, one per domain, as new domains become available. To overcome these issues, we investigate a new crowd counting task in an incremental-domain training setting, called lifelong crowd counting. Its goal is to alleviate catastrophic forgetting and improve generalization ability using a single model updated with the incremental domains. Specifically, we propose a self-distillation learning framework as a benchmark (forget less, count better, or FLCB) for lifelong crowd counting, which helps the model leverage previously learned, meaningful knowledge in a sustainable manner for better crowd counting and mitigates forgetting when new data arrive. A new quantitative metric, normalized Backward Transfer (nBwT), is developed to evaluate the degree of forgetting of the model in the lifelong learning process. Extensive experimental results demonstrate the superiority of our proposed benchmark in achieving a low catastrophic forgetting degree and strong generalization ability.

Key words: Crowd counting; Knowledge distillation; Lifelong learning
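
The abstract describes FLCB only at a high level. As a rough illustration of the self-distillation idea (the model trained on previous domains serves as a frozen teacher whose predicted density maps regularize training on a new domain), the PyTorch-style sketch below shows one plausible form of such an objective; the function names, the MSE-based losses, and the weight lambda_kd are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch (assumptions, not the paper's released code) of a
# self-distillation objective for domain-incremental crowd counting:
# the frozen model from the previous domains acts as the teacher, and its
# predicted density maps regularize the student while it fits the new domain.
import copy
import torch
import torch.nn.functional as F

def self_distill_loss(student, teacher, images, gt_density, lambda_kd=0.5):
    # lambda_kd is an assumed trade-off weight; the paper may balance the terms differently.
    pred = student(images)                      # predicted density maps on new-domain images
    count_loss = F.mse_loss(pred, gt_density)   # standard density-map regression loss
    with torch.no_grad():                       # teacher is frozen
        teacher_pred = teacher(images)
    kd_loss = F.mse_loss(pred, teacher_pred)    # stay close to previously learned knowledge
    return count_loss + lambda_kd * kd_loss

def start_new_domain(model):
    # Snapshot the current model as the frozen teacher before training on a new domain.
    teacher = copy.deepcopy(model).eval()
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher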

Chinese Summary (translated): Forget less, count better: a domain-incremental self-distillation learning benchmark for lifelong crowd counting

Jiaqi GAO1, Jingqi LI1, Hongming SHAN2,3, Yanyun QU4, Ze WANG5, Fei-Yue WANG6, Junping ZHANG1
1Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
2Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China
3Shanghai Center for Brain Science and Brain-inspired Technology, Shanghai 201210, China
4School of Information Science and Technology, Xiamen University, Xiamen 361005, China
5College of Information Sciences and Technology, Pennsylvania State University, PA 16802, USA
6State Key Laboratory for Management and Control of Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
Abstract: Crowd counting has important applications in public safety and pandemic control. A robust and practical crowd counting system must be able to continuously learn from newly arriving domain data in real-world scenarios, rather than fit the data distribution of a single domain only. Existing methods have several shortcomings when handling data from multiple domains: (1) owing to discrepancies in the intrinsic data distributions of different domains, a model's performance on old domains may become very limited (or even drop dramatically) after it is trained on images from new domains, a phenomenon known as catastrophic forgetting; (2) owing to domain shift, a model well trained on one specific domain usually performs poorly on other unseen domains; (3) handling data from multiple domains usually incurs linearly increasing storage overhead, for example when data from all domains are mixed for training, or when a separate model is simply trained for each domain. To overcome these problems, we explore a new crowd counting task under a domain-incremental training setting, namely lifelong crowd counting. Its goal is to alleviate catastrophic forgetting and improve generalization ability by continuously learning new domain data with a single model. Specifically, we propose a self-distillation learning framework as a benchmark (forget less, count better, FLCB) for lifelong crowd counting, which helps the model sustainably leverage previously learned meaningful knowledge to estimate crowd counts better and to forget less of the old data after training on new data. In addition, a new quantitative evaluation metric, normalized Backward Transfer (nBwT), is designed to evaluate the degree of forgetting of the model during the lifelong learning process. Extensive experimental results demonstrate the superiority of the proposed model, i.e., a low degree of catastrophic forgetting and strong generalization ability.

Key words: Crowd counting; Knowledge distillation; Lifelong learning
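
The nBwT metric is only named in the abstract, not defined. The sketch below is a plausible formulation, assuming nBwT averages the increase in MAE on each previously seen domain after the final domain is learned and normalizes it by the MAE measured right after that domain was learned; the paper's exact normalization may differ.

# Hedged sketch of a normalized backward-transfer style metric for counting;
# treat this as an illustration of the idea, not the paper's official formula.
from typing import Sequence

def normalized_backward_transfer(mae_matrix: Sequence[Sequence[float]]) -> float:
    # mae_matrix[i][j]: MAE on domain j after training up to domain i (j <= i).
    T = len(mae_matrix)
    if T < 2:
        return 0.0
    terms = []
    for j in range(T - 1):
        mae_after_all = mae_matrix[T - 1][j]   # MAE on domain j after the last domain is learned
        mae_when_learned = mae_matrix[j][j]    # MAE on domain j right after learning it
        terms.append((mae_after_all - mae_when_learned) / mae_when_learned)
    return sum(terms) / len(terms)

# Example with three domains; larger positive values indicate more forgetting.
mae = [[70.0],
       [78.0, 95.0],
       [85.0, 102.0, 60.0]]
print(normalized_backward_transfer(mae))  # (15/70 + 7/95) / 2 ≈ 0.144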



DOI: 10.1631/FITEE.2200380

CLC number: TP391

Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2022-12-26
On-line Access: 2024-08-27
