
CLC number: TP391.4

On-line Access: 2024-08-27

Received: 2023-10-17

Revision Accepted: 2024-05-08

Crosschecked: 2019-06-11


Citations:  GB/T 7714 / BibTeX / EndNote / RefMan (RIS)

 ORCID:

Murat Akpulat

http://orcid.org/0000-0001-8469-0034


Frontiers of Information Technology & Electronic Engineering  2019 Vol.20 No.6 P.849-861

http://doi.org/10.1631/FITEE.1800313


Detecting interaction/complexity within crowd movements using braid entropy


Author(s):  Murat Akpulat, Murat Ekinci

Affiliation(s):  Kelkit Aydın Doğan Vocational School, Gümüşhane University, Gümüşhane 29100, Turkey

Corresponding email(s):   muratakpulat@gumushane.edu.tr, mekinci@ktu.edu.tr

Key Words:  Crowd behavior, Motion segmentation, Motion entropy, Crowd scene analysis, Complexity detection, Braid entropy


Murat Akpulat, Murat Ekinci. Detecting interaction/complexity within crowd movements using braid entropy[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(6): 849-861.

@article{title="Detecting interaction/complexity within crowd movements using braid entropy",
author="Murat Akpulat, Murat Ekinci",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="20",
number="6",
pages="849-861",
year="2019",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800313"
}

%0 Journal Article
%T Detecting interaction/complexity within crowd movements using braid entropy
%A Murat Akpulat
%A Murat Ekinci
%J Frontiers of Information Technology & Electronic Engineering
%V 20
%N 6
%P 849-861
%@ 2095-9184
%D 2019
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.1800313

TY - JOUR
T1 - Detecting interaction/complexity within crowd movements using braid entropy
A1 - Murat Akpulat
A1 - Murat Ekinci
JO - Frontiers of Information Technology & Electronic Engineering
VL - 20
IS - 6
SP - 849
EP - 861
SN - 2095-9184
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1800313
ER -


Abstract: 
In the field of crowd analysis, segmenting moving and non-moving regions of an image is a crucial step toward understanding crowd behavior. In many studies, similar movements have been segmented according to their location, mutual adjacency, direction, and average speed. However, such segments do not necessarily exhibit the same type of behavior throughout each region. The purpose of this study is to better understand crowd behavior by locally measuring the degree of interaction/complexity within each segment. To this end, the flow of motion in the image is first represented as a series of trajectories. The image is divided into hexagonal cells, and finite-time braid entropy (FTBE) values are calculated for different projection angles of each cell. These values depend on the complexity of the spiral structure that the trajectories form throughout the movement and indicate the degree of interaction among pedestrians. In this way, behaviors of different complexities are identified within segments that appear, on the whole, as similar movements. The approach has been tested on 49 different video sequences from the UCF and CUHK databases.
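As an illustration of the kind of computation involved, the following minimal Python sketch extracts a braid-generator sequence from a set of trajectories projected onto a chosen angle and uses the number of strand exchanges per second as a crude complexity proxy. The function names, the assumption of at most one adjacent exchange per frame step, and the proxy itself are illustrative choices only; they are not the authors' implementation of FTBE, which measures the exponential growth rate of loops acted on by the braid.

import numpy as np

def braid_generators(trajs, angle):
    # Illustrative sketch (not the authors' code). trajs has shape
    # (n_strands, n_frames, 2) with the (x, y) positions of the trajectories
    # in one cell; angle is the projection angle in radians.
    direction = np.array([np.cos(angle), np.sin(angle)])
    proj = trajs @ direction            # projected coordinate per strand and frame
    order = np.argsort(proj, axis=0)    # strand ordering along the projection axis
    gens = []
    for t in range(1, proj.shape[1]):
        prev, cur = order[:, t - 1], order[:, t]
        # Record generator i whenever the strands in positions i and i+1 swap
        # (assumes at most one adjacent exchange per frame step).
        for i in range(len(prev) - 1):
            if prev[i] == cur[i + 1] and prev[i + 1] == cur[i]:
                gens.append(i + 1)
    return gens

def complexity_proxy(trajs, angle, fps=25.0):
    # Crude stand-in for FTBE: strand exchanges per second. The actual FTBE
    # instead measures how fast loops grow when acted on by the braid word.
    gens = braid_generators(trajs, angle)
    return len(gens) * fps / trajs.shape[1]

Calling braid_generators for several projection angles of the same hexagonal cell and aggregating the results mirrors the per-cell, multi-angle computation described in the abstract.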




