Visual interpretability for deep learning: a survey

Quan-shi ZHANG, Song-chun ZHU

Front. Inform. Technol. Electron. Eng., 2018, 19(1): 27-39. DOI: 10.1631/FITEE.1700808

Review

Abstract

This paper reviews recent studies in understanding neural-network representations and in learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been their Achilles' heel. At present, deep neural networks obtain high discrimination power at the cost of low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communication at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs) and revisit the visualization of CNN representations, methods for diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
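As a concrete illustration of one family of methods surveyed here (the visualization of CNN representations), the sketch below shows gradient-ascent activation maximization. This is a minimal, hedged example assuming PyTorch/torchvision are available; the choice of VGG-16, the layer index, and the filter index are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: gradient-ascent activation maximization for visualizing a CNN filter.
# Assumptions (not from the paper): PyTorch + torchvision, pre-trained VGG-16,
# an arbitrary middle conv layer (features[10]) and filter index (5).
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()
target_layer = model.features[10]   # hypothetical choice of a middle conv layer
filter_idx = 5                      # hypothetical filter to visualize

activations = {}
def hook(module, inp, out):
    activations["value"] = out      # cache the layer's feature map on each forward pass
handle = target_layer.register_forward_hook(hook)

# Start from random noise and ascend the gradient of the chosen filter's mean response.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    model(img)
    # Maximize the mean activation of one filter, i.e., minimize its negative.
    loss = -activations["value"][0, filter_idx].mean()
    loss.backward()
    optimizer.step()

handle.remove()
# `img` now roughly depicts the visual pattern that most strongly excites the chosen filter.
```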

Keywords

Artificial intelligence / Deep learning / Interpretable model

Cite this article

Quan-shi ZHANG, Song-chun ZHU. Visual interpretability for deep learning: a survey. Front. Inform. Technol. Electron. Eng., 2018, 19(1): 27-39. DOI: 10.1631/FITEE.1700808



RIGHTS & PERMISSIONS

Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2018


Supplementary files

FITEE-0027-18005-QSZ_suppl_1

FITEE-0027-18005-QSZ_suppl_2
