CLC number: TP391

On-line Access: 2018-03-10

Received: 2017-12-02

Revision Accepted: 2018-01-25

Crosschecked: 2018-01-28


ORCID: Quan-shi Zhang, http://orcid.org/0000-0002-6108-2738


Frontiers of Information Technology & Electronic Engineering  2018 Vol.19 No.1 P.27-39

http://doi.org/10.1631/FITEE.1700808


Visual interpretability for deep learning: a survey


Author(s):  Quan-shi Zhang, Song-chun Zhu

Affiliation(s):  University of California, Los Angeles, California 90095, USA

Corresponding email(s):   zhangqs@ucla.edu, sczhu@stat.ucla.edu

Key Words:  Artificial intelligence, Deep learning, Interpretable model


Quan-shi Zhang, Song-chun Zhu. Visual interpretability for deep learning: a survey[J]. Frontiers of Information Technology & Electronic Engineering, 2018, 19(1): 27-39.

@article{Zhang2018visual,
title="Visual interpretability for deep learning: a survey",
author="Quan-shi Zhang, Song-chun Zhu",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="19",
number="1",
pages="27-39",
year="2018",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1700808"
}

%0 Journal Article
%T Visual interpretability for deep learning: a survey
%A Quan-shi Zhang
%A Song-chun Zhu
%J Frontiers of Information Technology & Electronic Engineering
%V 19
%N 1
%P 27-39
%@ 2095-9184
%D 2018
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1700808

TY - JOUR
T1 - Visual interpretability for deep learning: a survey
A1 - Quan-shi Zhang
A1 - Song-chun Zhu
JO - Frontiers of Information Technology & Electronic Engineering
VL - 19
IS - 1
SP - 27
EP - 39
SN - 2095-9184
Y1 - 2018
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1700808
ER -


Abstract: 
This paper reviews recent studies on understanding neural-network representations and on learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been their Achilles' heel. At present, deep neural networks obtain high discrimination power at the cost of low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communication at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
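As a concrete illustration of the visualization methods surveyed here, below is a minimal sketch of gradient-based saliency (in the spirit of Simonyan et al., 2013): the gradient of the top class score with respect to the input pixels highlights the regions the network relies on. PyTorch, the torchvision ResNet-18 backbone, and the 224x224 input size are illustrative assumptions, not prescribed by the paper.

import torch
import torchvision.models as models

# Illustrative assumption: a pretrained torchvision backbone.
model = models.resnet18(pretrained=True).eval()

def saliency_map(image):
    # image: a normalized (1, 3, H, W) input tensor.
    image = image.clone().requires_grad_(True)
    logits = model(image)                     # forward pass
    top = logits.argmax(dim=1).item()         # predicted class index
    logits[0, top].backward()                 # d(class score) / d(pixels)
    # Max over the three color channels gives one importance value per pixel.
    return image.grad.abs().max(dim=1)[0].squeeze(0)

dummy = torch.randn(1, 3, 224, 224)           # stand-in for a real, preprocessed image
print(saliency_map(dummy).shape)              # torch.Size([224, 224])

In practice the resulting map is overlaid on the input image; sharper variants such as guided backpropagation or Grad-CAM follow the same gradient-based recipe.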

Visual interpretability for deep learning

Summary: This paper reviews recent work on understanding the internal feature representations of neural networks and on training deep neural networks whose middle-layer representations are interpretable. Although deep neural networks have achieved outstanding performance in many artificial intelligence tasks, the interpretability of their middle-layer representations remains a major bottleneck for the field. At present, deep neural networks obtain strong classification power at the cost of black-box representations with low interpretability. We believe that improving the interpretability of middle-layer feature representations can help break several bottlenecks of deep learning, such as training from small data, semantic-level interactive human-machine training, and precisely diagnosing and repairing defects in middle-layer representations according to their internal semantics. Focusing on convolutional neural networks, this paper surveys: (1) methods for visualizing network representations; (2) methods for diagnosing network representations; (3) methods for automatically disentangling and explaining convolutional neural networks; (4) methods for learning neural networks with interpretable middle-layer feature representations; (5) middle-to-end deep learning algorithms based on network interpretability. Finally, possible future trends of explainable artificial intelligence are discussed.
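Item (2) above, diagnosing a pre-trained network, is often approached by perturbation. Below is a minimal sketch of occlusion-based diagnosis (in the spirit of Zeiler and Fergus, 2014): slide a patch over the image and record how much the class score drops. The patch size, stride, zero-fill value, and ResNet-18 backbone are illustrative assumptions.

import torch
import torchvision.models as models

# Illustrative assumption: a pretrained torchvision backbone.
model = models.resnet18(pretrained=True).eval()

@torch.no_grad()
def occlusion_heatmap(image, target_class, patch=32, stride=16):
    # image: a normalized (1, 3, H, W) tensor; returns a grid of score drops.
    _, _, h, w = image.shape
    base = model(image)[0, target_class].item()        # unoccluded class score
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i in range(rows):
        for j in range(cols):
            x = image.clone()
            y0, x0 = i * stride, j * stride
            x[:, :, y0:y0 + patch, x0:x0 + patch] = 0  # occlude one patch
            heat[i, j] = base - model(x)[0, target_class].item()
    return heat   # large values mark regions the prediction depends on

dummy = torch.randn(1, 3, 224, 224)
print(occlusion_heatmap(dummy, target_class=0).shape)  # torch.Size([13, 13])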

Keywords: Artificial intelligence; Deep learning; Interpretable model


