On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2022-04-22
Yun-he Pan. On visual knowledge[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(8): 1021-1025.
@article{pan2019visual,
  title="On visual knowledge",
  author="Yun-he Pan",
  journal="Frontiers of Information Technology & Electronic Engineering",
  volume="20",
  number="8",
  pages="1021-1025",
  year="2019",
  publisher="Zhejiang University Press & Springer",
  doi="10.1631/FITEE.1910001"
}
%0 Journal Article
%T On visual knowledge
%A Yun-he Pan
%J Frontiers of Information Technology & Electronic Engineering
%V 20
%N 8
%P 1021-1025
%@ 2095-9184
%D 2019
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1910001
TY - JOUR
T1 - On visual knowledge
A1 - Yun-he Pan
JO - Frontiers of Information Technology & Electronic Engineering
VL - 20
IS - 8
SP - 1021
EP - 1025
SN - 2095-9184
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1910001
ER -
Abstract: This paper presents the concept of “visual knowledge.” Visual knowledge is a new form of knowledge representation, different from all other visual or knowledge representations that have emerged in the development of artificial intelligence (AI). A visual concept is composed of prototypes, category structures, hierarchical structures, action structures, etc. Visual concepts can further constitute a visual proposition, incorporating scene structures and their dynamics, and visual propositions can in turn be used to narrate a visual scene. This paper suggests that careful use of developments in computer graphics technology will contribute to realizing visual knowledge representation, reasoning, and analysis, and that careful use of progress in computer vision will promote the learning of visual knowledge. Representation, reasoning, learning, and utilization of visual knowledge will form a key step toward remarkable breakthroughs in the era of AI 2.0.
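The compositional structure outlined in the abstract (visual concepts built from prototypes, category/hierarchical structures, and action structures, which combine into visual propositions over scene structures and their dynamics, and finally into a narrated scene) can be illustrated with a minimal data-structure sketch. The Python classes below are an illustrative assumption only; none of the names (VisualConcept, VisualProposition, VisualNarrative, etc.) come from the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical sketch of the compositional structure described in the abstract.
# The class and field names are assumptions made for illustration, not the
# paper's formalism.

@dataclass
class Prototype:
    """A typical exemplar of a concept, e.g. a canonical shape or image."""
    name: str
    shape_descriptor: List[float]   # e.g. a coarse geometric embedding

@dataclass
class ActionStructure:
    """A named deformation or motion a concept can undergo."""
    name: str
    parameters: List[float]

@dataclass
class VisualConcept:
    """Prototype, category/hierarchical, and action structures of one concept."""
    name: str
    prototypes: List[Prototype]
    subcategories: List["VisualConcept"] = field(default_factory=list)
    actions: List[ActionStructure] = field(default_factory=list)

@dataclass
class VisualProposition:
    """A scene structure: concepts in spatial relation, plus optional dynamics."""
    subjects: List[VisualConcept]
    spatial_relations: List[str]    # e.g. "on-top-of", "inside"
    dynamics: Optional[ActionStructure] = None

@dataclass
class VisualNarrative:
    """A sequence of visual propositions narrating a scene over time."""
    scenes: List[VisualProposition]

if __name__ == "__main__":
    cup = VisualConcept(
        name="cup",
        prototypes=[Prototype("mug", [0.8, 0.3, 0.5])],
        actions=[ActionStructure("tilt", [30.0])],
    )
    table = VisualConcept(name="table", prototypes=[Prototype("desk", [2.0, 1.0, 0.7])])
    scene = VisualProposition(subjects=[cup, table], spatial_relations=["on-top-of"])
    story = VisualNarrative(scenes=[scene])
    print(len(story.scenes), "scene(s) in the narrative")
```

In this reading, reasoning and analysis would operate over such structured objects (as the paper suggests computer graphics techniques could support), while learning would populate prototypes and action parameters from data (as computer vision could support); the sketch only fixes a possible shape for those objects.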