
CLC number: TP391

On-line Access: 2017-01-20

Received: 2016-12-12

Revision Accepted: 2016-12-26

Crosschecked: 2016-12-26


ORCID: Yong-hong Tian, http://orcid.org/0000-0002-2978-5935


Frontiers of Information Technology & Electronic Engineering  2017 Vol.18 No.1 P.58-67

http://doi.org/10.1631/FITEE.1601804


Towards human-like and transhuman perception in AI 2.0: a review


Author(s):  Yong-hong Tian, Xi-lin Chen, Hong-kai Xiong, Hong-liang Li, Li-rong Dai, Jing Chen, Jun-liang Xing, Jing Chen, Xi-hong Wu, Wei-min Hu, Yu Hu, Tie-jun Huang, Wen Gao

Affiliation(s):  School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China

Corresponding email(s):   yhtian@pku.edu.cn, tjhuang@pku.edu.cn

Key Words:  Intelligent perception, Active vision, Auditory perception, Speech perception, Autonomous learning


Yong-hong Tian, Xi-lin Chen, Hong-kai Xiong, Hong-liang Li, Li-rong Dai, Jing Chen, Jun-liang Xing, Jing Chen, Xi-hong Wu, Wei-min Hu, Yu Hu, Tie-jun Huang, Wen Gao. Towards human-like and transhuman perception in AI 2.0: a review[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 58-67.



Abstract: 
Perception is the interaction interface between an intelligent system and the real world. Without sophisticated and flexible perceptual capabilities, it is impossible to create advanced artificial intelligence (AI) systems. One of the most significant features of the next-generation AI, called ‘AI 2.0’, will be that it is empowered with intelligent perceptual capabilities that simulate the mechanisms of the human brain and are likely to surpass the human brain in performance. In this paper, we briefly review state-of-the-art advances across different areas of perception, including visual perception, auditory perception, speech perception, and perceptual information processing and learning engines. On this basis, we envision several R&D trends in intelligent perception for the forthcoming era of AI 2.0: (1) human-like and transhuman active vision; (2) auditory perception and computation in actual auditory settings; (3) speech perception and computation in natural interaction settings; (4) autonomous learning of perceptual information; (5) large-scale perceptual information processing and learning platforms; and (6) urban omnidirectional intelligent perception and reasoning engines. We believe these research directions should be highlighted in future plans for AI 2.0.

Chinese title (translated): Human-like and transhuman perception in the AI 2.0 era: a review and outlook

Chinese abstract (translated): Perception is the interaction interface between an intelligent system and the real world. Without sophisticated and flexible perceptual capabilities, it is impossible to create advanced artificial intelligence (AI) systems. Recently, Academician Yun-he Pan proposed the concept of AI 2.0, whose most important feature is that future AI systems should possess human-like, or even transhuman, intelligent perceptual capabilities. This paper briefly reviews the state of research in different areas of intelligent perception, including visual perception, auditory perception, speech perception, and perceptual information processing and learning engines. On this basis, it envisions the key research directions for intelligent perception in the forthcoming AI 2.0 era: (1) human-like and transhuman active vision; (2) auditory perception in actual acoustic scenes; (3) speech perception and computation in natural interaction settings; (4) autonomous learning for media perception; (5) large-scale perceptual information processing and learning engines; and (6) urban omnidirectional intelligent perception and reasoning engines. These research directions should be a key focus of future AI 2.0 research plans.

Keywords (translated): Intelligent perception; Active vision; Auditory perception; Speech perception; Autonomous learning




Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE