CLC number: TP24
On-line Access: 2019-01-30
Received: 2018-09-20
Revision Accepted: 2018-11-26
Crosschecked: 2019-01-08
Ning-shi Yao, Qiu-yang Tao, Wei-yu Liu, Zhen Liu, Ye Tian, Pei-yu Wang, Timothy Li, Fumin Zhang. Autonomous flying blimp interaction with human in an indoor space[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(1): 45-59.
@article{Yao2019,
title="Autonomous flying blimp interaction with human in an indoor space",
author="Ning-shi Yao, Qiu-yang Tao, Wei-yu Liu, Zhen Liu, Ye Tian, Pei-yu Wang, Timothy Li, Fumin Zhang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="20",
number="1",
pages="45-59",
year="2019",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800587"
}
%0 Journal Article
%T Autonomous flying blimp interaction with human in an indoor space
%A Ning-shi Yao
%A Qiu-yang Tao
%A Wei-yu Liu
%A Zhen Liu
%A Ye Tian
%A Pei-yu Wang
%A Timothy Li
%A Fumin Zhang
%J Frontiers of Information Technology & Electronic Engineering
%V 20
%N 1
%P 45-59
%@ 2095-9184
%D 2019
%I Zhejiang University Press & Springer
%DOI 10.1631/FITEE.1800587
TY - JOUR
T1 - Autonomous flying blimp interaction with human in an indoor space
A1 - Ning-shi Yao
A1 - Qiu-yang Tao
A1 - Wei-yu Liu
A1 - Zhen Liu
A1 - Ye Tian
A1 - Pei-yu Wang
A1 - Timothy Li
A1 - Fumin Zhang
JO - Frontiers of Information Technology & Electronic Engineering
VL - 20
IS - 1
SP - 45
EP - 59
SN - 2095-9184
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1800587
ER -
Abstract: We present the Georgia Tech Miniature Autonomous Blimp (GT-MAB), which is designed to support human-robot interaction experiments in an indoor space for flights of up to two hours. GT-MAB is safe while flying in close proximity to humans. It is able to detect the face of a human subject, follow the human, and recognize hand gestures. GT-MAB employs a deep neural network based on the single shot multibox detector (SSD) to jointly detect a human user's face and hands in a real-time video stream collected by the onboard camera. A human-robot interaction procedure is designed and tested with various human users. The learning algorithm recognizes two hand-waving gestures, and the human user does not need to wear any additional tracking device when interacting with the flying blimp. Vision-based feedback controllers are designed to control the blimp to follow the human and to fly in one of two distinguishable patterns in response to each of the two hand gestures. The blimp communicates its intentions to the human user by displaying visual symbols. The collected experimental data show that the visual feedback from the blimp in reaction to the human user significantly improves the interactive experience between the blimp and the human. The demonstrated success of this procedure indicates that GT-MAB could serve as a flying robot capable of safely collecting human data in an indoor environment.
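The abstract describes a vision-in-the-loop pipeline: an SSD-based detector localizes the user's face and hands in the onboard video stream, and vision-based feedback controllers map those detections to blimp motion. The sketch below is a minimal, hypothetical illustration of the control side only: a proportional visual-servoing loop that converts a face bounding box into yaw, forward, and vertical commands. The image resolution, gain values, target box height, and the BlimpCommand interface are assumptions made for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the authors' implementation): proportional visual
# servoing from a detected face bounding box to blimp commands.
from dataclasses import dataclass

IMG_W, IMG_H = 640, 480          # assumed onboard camera resolution
TARGET_BOX_HEIGHT = 120.0        # desired face height in pixels, a proxy for standoff distance

@dataclass
class BoundingBox:
    x: float   # top-left corner, pixels
    y: float
    w: float
    h: float

@dataclass
class BlimpCommand:
    yaw_rate: float   # positive = turn right (illustrative units)
    forward: float    # normalized thrust in [-1, 1]
    vertical: float   # normalized thrust in [-1, 1]

# Proportional gains; purely illustrative values, not identified blimp parameters.
K_YAW, K_FWD, K_ALT = 0.002, 0.004, 0.003

def clamp(v: float, lo: float = -1.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, v))

def follow_face(box: BoundingBox) -> BlimpCommand:
    """Re-center the detected face in the image and regulate its apparent size,
    so the blimp turns toward the person and keeps a roughly constant distance."""
    cx = box.x + box.w / 2.0
    cy = box.y + box.h / 2.0
    yaw_err = cx - IMG_W / 2.0            # horizontal pixel error
    alt_err = IMG_H / 2.0 - cy            # vertical pixel error (image y grows downward)
    dist_err = TARGET_BOX_HEIGHT - box.h  # face too small -> move closer
    return BlimpCommand(
        yaw_rate=K_YAW * yaw_err,
        forward=clamp(K_FWD * dist_err),
        vertical=clamp(K_ALT * alt_err),
    )

if __name__ == "__main__":
    # Example: a face detected slightly left of center and farther away than desired.
    print(follow_face(BoundingBox(x=200, y=180, w=80, h=90)))
```

In a loop of this kind, the bounding-box center error drives heading and altitude, while the apparent face size serves as a monocular cue for distance, in the spirit of the monocular following work cited as [40]; the specific mapping and gains above are only illustrative.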
[1]Acharya U, Bevins A, Duncan BA, 2017. Investigation of human-robot comfort with a small unmanned aerial vehicle compared to a ground robot. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, p.2758-2765.
[2]Arroyo D, Lucho C, Roncal J, et al., 2014. Daedalus: a sUAV for human-robot interaction. Proc 9th ACM/IEEE Int Conf on Human-Robot Interaction, p.116-117.
[3]KLT: an implementation of the Kanade-Lucas-Tomasi feature tracker.
[4]Burri M, Gasser L, Käch M, et al., 2013. Design and control of a spherical omnidirectional blimp. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, p.1873-1879.
[5]Cauchard JR, Zhai KY, Spadafora M, et al., 2016. Emotion encoding in human-drone interaction. Proc 11th ACM/IEEE Int Conf on Human-Robot Interaction, p.263-270.
[6]Cho S, Mishra V, Tao Q, et al., 2017. Autopilot design for a class of miniature autonomous blimps. Proc IEEE Conf on Control Technology and Applications, p.841-846.
[7]Corke P, 2011. Robotics, Vision and Control: Fundamental Algorithms in MATLAB. Springer, Berlin, Germany.
[8]Costante G, Bellocchio E, Valigi P, et al., 2014. Personalizing vision-based gestural interfaces for HRI with UAVs: a transfer learning approach. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, p.3319-3326.
[9]de Crescenzio F, Miranda G, Persiani F, et al., 2009. A first implementation of an advanced 3D interface to control and supervise UAV (uninhabited aerial vehicles) missions. Presence, 18(3):171-184.
[10]Draper M, Calhoun G, Ruff H, et al., 2003. Manual versus speech input for unmanned aerial vehicle control station operations. Proc Hum Factors Ergon Soc Ann Meet, 47(1):109-113.
[11]Duffy BR, 2003. Anthropomorphism and the social robot. Rob Auton Syst, 42(3-4):177-190.
[12]Duncan BA, Murphy RR, 2013. Comfortable approach distance with small unmanned aerial vehicles. Proc IEEE RO-MAN, p.786-792.
[13]Goodrich MA, Schultz AC, 2007. Human-robot interaction: a survey. Found Trends Hum-Comput Interact, 1(3):203-275.
[14]Graether E, Mueller F, 2012. Joggobot: a flying robot as jogging companion. Proc ACM SIGCHI Conf on Human Factors in Computing Systems, p.1063-1066.
[15]Hall ET, 1966. The Hidden Dimension. Doubleday, New York, USA.
[16]Hansen JP, Alapetite A, MacKenzie IS, et al., 2014. The use of gaze to control drones. Proc Symp on Eye Tracking Research and Applications, p.27-34.
[17]He D, Ren HY, Hua WD, et al., 2011. Flyingbuddy: augment human mobility and perceptibility. Proc 13th Int Conf on Ubiquitous Computing, p.615-616.
[18]Helbing D, Molnár P, 1995. Social force model for pedestrian dynamics. Phys Rev E, 51(5):4282-4286.
[19]Lichtenstern M, Frassl M, Perun B, et al., 2012. A prototyping environment for interaction between a human and a robotic multi-agent system. Proc 7th ACM/IEEE Int Conf on Human-Robot Interaction, p.185-186.
[20]Liew CF, Yairi T, 2013. Quadrotor or blimp? Noise and appearance considerations in designing social aerial robot. Proc 8th ACM/IEEE Int Conf on Human-Robot Interaction, p.183-184.
[21]Lim H, Sinha SN, 2015. Monocular localization of a moving person onboard a quadrotor MAV. Proc IEEE Int Conf on Robotics and Automation, p.2182-2189.
[22]Liu W, Anguelov D, Erhan D, et al., 2016. SSD: single shot multibox detector. Proc 14th European Conf on Computer Vision, p.21-37.
[23]Mittal A, Zisserman A, Torr PHS, 2011. Hand detection using multiple proposals. Proc British Machine Vision Conf, p.1-11.
[24]Monajjemi VM, Wawerla J, Vaughan R, et al., 2013. HRI in the sky: creating and commanding teams of UAVs with a vision-mediated gestural interface. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, p.617-623.
[25]Monajjemi VM, Mohaimenianpour S, Vaughan R, 2016. UAV, come to me: end-to-end, multi-scale situated HRI with an uninstrumented human and a distant UAV. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, p.4410-4417.
[26]Nagi J, Giusti A, di Caro GA, et al., 2014. Human control of UAVs using face pose estimates and hand gestures. Proc ACM/IEEE Int Conf on Human-Robot Interaction, p.252-253.
[27]Naseer T, Sturm J, Cremers D, 2013. FollowMe: person following and gesture recognition with a quadrocopter. Proc IEEE/RSJ Int Conf on Intelligent Robots and Systems, p.624-630.
[28]Perera AG, Law YW, Chahl J, 2018. Human pose and path estimation from aerial video using dynamic classifier selection. Cogn Comput, 10(6):1019-1041.
[29]Peshkova E, Hitz M, Kaufmann B, 2017. Natural interaction techniques for an unmanned aerial vehicle system. IEEE Perv Comput, 16(1):34-42.
[30]Pourmehr S, Monajjemi VM, Sadat SA, et al., 2014. “You are green”: a touch-to-name interaction in an integrated multi-modal multi-robot HRI system. Proc ACM/IEEE Int Conf on Human-Robot Interaction, p.266-267.
[31]Schneegass S, Alt F, Scheible J, et al., 2014. Midair displays: concept and first experiences with free-floating pervasive displays. Proc Int Symp on Pervasive Displays, Article 27.
[32]Sharma M, Hildebrandt D, Newman G, et al., 2013. Communicating affect via flight path: exploring use of the Laban effort system for designing affective locomotion paths. Proc ACM/IEEE Int Conf on Human-Robot Interaction, p.293-300.
[33]Srisamosorn V, Kuwahara N, Yamashita A, et al., 2016. Design of face tracking system using fixed 360-degree cameras and flying blimp for health care evaluation. Proc 4th Int Conf on Serviceology.
[34]St-Onge D, Brèches PY, Sharf I, et al., 2017. Control, localization and human interaction with an autonomous lighter-than-air performer. Rob Auton Syst, 88:165-186.
[35]Szafir D, Mutlu B, Fong T, 2014. Communication of intent in assistive free flyers. Proc ACM/IEEE Int Conf on Human-Robot Interaction, p.358-365.
[36]Szafir D, Mutlu B, Fong T, 2015. Communicating directionality in flying robots. Proc 10th Annual ACM/IEEE Int Conf on Human-Robot Interaction, p.19-26.
[37]Tao QY, Cha J, Hou MX, et al., 2018. Parameter identification of blimp dynamics through swinging motion. Proc 15th Int Conf on Control, Automation, Robotics and Vision.
[38]Viola P, Jones MJ, 2004. Robust real-time face detection. Int J Comput Vis, 57(2):137-154.
[39]Wold S, Esbensen K, Geladi P, 1987. Principal component analysis. Chemom Intell Lab Syst, 2(1-3):37-52.
[40]Yao NS, Anaya E, Tao QY, et al., 2017. Monocular vision-based human following on miniature robotic blimp. Proc IEEE Int Conf on Robotics and Automation, p.3244-3249.