
On-line Access: 2021-12-23

Received: 2021-09-30

Revision Accepted: 2021-10-07

Crosschecked: 2021-11-22


ORCID: Yi Yang, https://orcid.org/0000-0002-0512-880X


Frontiers of Information Technology & Electronic Engineering  2021 Vol.22 No.12 P.1551-1558

http://doi.org/10.1631/FITEE.2100463


Multiple knowledge representation for big data artificial intelligence: framework, applications, and case studies


Author(s):  Yi Yang, Yueting Zhuang, Yunhe Pan

Affiliation(s):  College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

Corresponding email(s):   yangyics@zju.edu.cn, yzhuang@zju.edu.cn, panyh@zju.edu.cn

Key Words: Multiple knowledge representation; Artificial intelligence; Big data



Yi Yang, Yueting Zhuang, Yunhe Pan. Multiple knowledge representation for big data artificial intelligence: framework, applications, and case studies[J]. Frontiers of Information Technology & Electronic Engineering, 2021, 22(12): 1551-1558.



Abstract: 
In this paper, we present a multiple knowledge representation (MKR) framework and discuss its potential for developing big data artificial intelligence (AI) techniques with possible broader impacts across different AI areas. Typically, canonical knowledge representations and modern representations each emphasize a particular aspect of transforming inputs into symbolic encoding or vectors. For example, knowledge graphs focus on depicting semantic connections among concepts, whereas deep neural networks (DNNs) are more of a tool to perceive raw signal inputs. MKR is an advanced AI representation framework for more complete intelligent functions, such as raw signal perception, feature extraction and vectorization, knowledge symbolization, and logical reasoning. MKR has two benefits: (1) it makes the current AI techniques (dominated by deep learning) more explainable and generalizable, and (2) it expands current AI techniques by integrating MKR to facilitate the mutual benefits of the complementary capacity of each representation, e.g., raw signal perception and symbolic encoding. We expect that MKR research and its applications will drive the evolution of AI 2.0 and beyond.
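The complementary pairing described above — a vector view produced by a neural feature extractor alongside a symbolic view captured in a knowledge graph — can be illustrated with a toy sketch. Everything here is illustrative: the embeddings, the triples, and the function names (`mkr_similar`, `shares_category`) are invented for this example and do not come from the paper.

```python
# Toy sketch of the multiple knowledge representation (MKR) idea:
# the same concepts are encoded both as vectors (the "DNN" view) and
# as symbolic triples (the "knowledge graph" view), and a query can
# combine the complementary strengths of both representations.
from math import sqrt

# Vector representation: stand-in for DNN feature embeddings.
EMBEDDINGS = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.0],
    "car": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Symbolic representation: a tiny knowledge graph of (subject, predicate, object) triples.
TRIPLES = {
    ("cat", "is_a", "animal"),
    ("dog", "is_a", "animal"),
    ("car", "is_a", "vehicle"),
}

def shares_category(a, b):
    """Symbolic check: do a and b share a common 'is_a' parent in the graph?"""
    parents = lambda x: {o for (s, p, o) in TRIPLES if s == x and p == "is_a"}
    return bool(parents(a) & parents(b))

def mkr_similar(a, b, threshold=0.8):
    """Combine both views: vector similarity gated by symbolic consistency."""
    return cosine(EMBEDDINGS[a], EMBEDDINGS[b]) >= threshold and shares_category(a, b)

print(mkr_similar("cat", "dog"))  # embeddings are close AND both are animals
print(mkr_similar("cat", "car"))  # rejected: no shared symbolic category
```

The gating design choice mirrors the abstract's point: the perceptual (vector) representation alone can conflate things that merely look alike, while the symbolic representation supplies the semantic constraint that makes the combined judgment more explainable.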





Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE