A Roundup of Interpretability Methods for Deep Neural Networks, with TensorFlow Code


Recommended by 新智元 (AI Era)
Editor: 元子
[AI Era Guide] Understanding neural networks: deep learning has long been considered weakly interpretable, yet research on understanding neural networks has never stopped. This article introduces several interpretability methods for neural networks, together with code that can be run under Jupyter.
1. Activation Maximization
There are two methods for explaining a deep neural network through activation maximization:
1.1 Activation Maximization (AM)
The related code is as follows:
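A minimal TensorFlow 2 sketch of the idea: gradient ascent on the input itself to maximize one class logit. Here `model` is assumed to be a pre-trained tf.keras classifier that outputs logits, and the input shape, step count, and L2 penalty are illustrative choices, not values from the original article.

```python
import tensorflow as tf

def activation_maximization(model, class_index, shape=(1, 28, 28, 1),
                            steps=200, lr=0.1, l2=0.01):
    """Gradient ascent on the input to maximize one class logit."""
    x = tf.Variable(tf.random.normal(shape, stddev=0.1))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            logit = model(x)[0, class_index]
            # Maximize the logit; the L2 term keeps x in a plausible range.
            loss = -logit + l2 * tf.reduce_sum(x ** 2)
        grads = tape.gradient(loss, [x])
        opt.apply_gradients(zip(grads, [x]))
    return x.numpy()
```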
1.2 Performing AM in Code Space
The related code is as follows:
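A sketch under the assumption that a pre-trained `generator` (latent codes to images) and `classifier` (images to logits) are available as tf.keras models. Optimizing in code space keeps the synthesized input on the generator's image manifold instead of in raw pixel space:

```python
import tensorflow as tf

def am_in_code_space(generator, classifier, class_index,
                     code_dim=100, steps=200, lr=0.05):
    """Gradient ascent on a latent code z so that generator(z)
    maximizes the chosen class logit of the classifier."""
    z = tf.Variable(tf.random.normal([1, code_dim]))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            image = generator(z)
            loss = -classifier(image)[0, class_index]
        grads = tape.gradient(loss, [z])
        opt.apply_gradients(zip(grads, [z]))
    return generator(z).numpy()
```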
2. Layer-wise Relevance Propagation
Under layer-wise relevance propagation there are five explanation methods: Sensitivity Analysis, Simple Taylor Decomposition, Layer-wise Relevance Propagation, Deep Taylor Decomposition, and DeepLIFT. The general approach is to first introduce the notion of a relevance score via sensitivity analysis, explore basic relevance decomposition with simple Taylor decomposition, and then build up the various layer-wise relevance propagation methods. The details are as follows:
2.1 Sensitivity Analysis
The related code is as follows:
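A minimal sketch: sensitivity analysis scores each input dimension by the squared partial derivative of the class score, so a single gradient computation suffices. `model` is again an assumed tf.keras classifier and `x` a batched input:

```python
import tensorflow as tf

def sensitivity_analysis(model, x, class_index):
    """Relevance R_i = (d f_c / d x_i)^2: squared gradient of the class score."""
    x = tf.convert_to_tensor(x, tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_index]
    grad = tape.gradient(score, x)
    return tf.square(grad).numpy()
```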
2.2 Simple Taylor Decomposition
The related code is as follows:
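A minimal sketch of the first-order Taylor expansion at a root point x0 with f(x0) ≈ 0. Taking x0 = 0, a common choice for ReLU networks, gives the relevance R_i = x_i · ∂f/∂x_i:

```python
import tensorflow as tf

def simple_taylor(model, x, class_index):
    """R_i = x_i * d f_c / d x_i, a first-order Taylor expansion
    around the root point x0 = 0 (a model-dependent choice)."""
    x = tf.convert_to_tensor(x, tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_index]
    grad = tape.gradient(score, x)
    return (x * grad).numpy()
```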
2.3 Layer-wise Relevance Propagation
The related code is as follows:
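A simplified LRP-ε sketch for a plain stack of Dense+ReLU tf.keras layers (`layers` is the list of Dense layers of a sequential model); convolutional networks apply the same redistribution rule to each layer type. For ReLU layers, relevance passes through the nonlinearity unchanged:

```python
import tensorflow as tf

def lrp_epsilon(layers, x, class_index, eps=1e-6):
    """LRP-epsilon: redistribute relevance proportionally to contributions."""
    # Forward pass, remembering the input of every layer.
    activations = [tf.convert_to_tensor(x, tf.float32)]
    for layer in layers:
        activations.append(layer(activations[-1]))
    # Relevance starts as the score of the chosen output neuron.
    out = activations[-1]
    relevance = tf.one_hot([class_index], out.shape[-1]) * out
    # Backward pass over the layers.
    for layer, a in zip(reversed(layers), reversed(activations[:-1])):
        z = tf.matmul(a, layer.kernel) + layer.bias
        z = z + eps * tf.where(z >= 0, 1.0, -1.0)   # epsilon stabilizer
        s = relevance / z
        relevance = a * tf.matmul(s, tf.transpose(layer.kernel))
    return relevance.numpy()
```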
2.4 Deep Taylor Decomposition
The related code is as follows:
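A sketch of the z+ rule, which deep Taylor decomposition yields for ReLU networks with non-negative layer inputs (and which coincides with LRP-α1β0); same Dense+ReLU assumption as above:

```python
import tensorflow as tf

def deep_taylor_zplus(layers, x, class_index):
    """Deep Taylor decomposition via the z+ rule: only excitatory
    (positive-weight) contributions receive relevance."""
    activations = [tf.convert_to_tensor(x, tf.float32)]
    for layer in layers:
        activations.append(layer(activations[-1]))
    out = activations[-1]
    relevance = tf.one_hot([class_index], out.shape[-1]) * out
    for layer, a in zip(reversed(layers), reversed(activations[:-1])):
        w_pos = tf.maximum(layer.kernel, 0.0)   # keep positive weights only
        z = tf.matmul(a, w_pos) + 1e-9          # z+ denominator (bias dropped)
        s = relevance / z
        relevance = a * tf.matmul(s, tf.transpose(w_pos))
    return relevance.numpy()
```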
2.5 DeepLIFT
The related code is as follows:
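A sketch of DeepLIFT's rescale rule, assuming a model with Dense+ReLU hidden layers and a final linear Dense layer producing logits. Multipliers propagate like gradients, except that the ReLU derivative is replaced by the slope of the ReLU between the reference input and the actual input:

```python
import tensorflow as tf

def deeplift_rescale(layers, x, x_ref, class_index):
    """DeepLIFT (rescale rule): attribute the difference between
    model(x) and model(x_ref) to the input features."""
    ax, ar = tf.convert_to_tensor(x, tf.float32), tf.convert_to_tensor(x_ref, tf.float32)
    x0, ratios = ax, []
    for layer in layers[:-1]:
        zx = tf.matmul(ax, layer.kernel) + layer.bias   # pre-activations
        zr = tf.matmul(ar, layer.kernel) + layer.bias
        dz = zx - zr
        # Rescale rule: slope of ReLU between reference and actual input;
        # fall back to the ordinary ReLU gradient when dz is near zero.
        ratios.append(tf.where(tf.abs(dz) > 1e-6,
                               (tf.nn.relu(zx) - tf.nn.relu(zr)) / dz,
                               tf.cast(zx > 0.0, tf.float32)))
        ax, ar = tf.nn.relu(zx), tf.nn.relu(zr)
    # Multiplier starts as a one-hot on the class logit of the linear head.
    m = tf.one_hot([class_index], layers[-1].units)
    m = tf.matmul(m, tf.transpose(layers[-1].kernel))
    for layer, ratio in zip(reversed(layers[:-1]), reversed(ratios)):
        m = tf.matmul(m * ratio, tf.transpose(layer.kernel))
    # Contribution of each input feature to the logit difference.
    return ((x0 - tf.convert_to_tensor(x_ref, tf.float32)) * m).numpy()
```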
3. Gradient-Based Methods
The gradient-based methods are: Deconvolution, Backpropagation, Guided Backpropagation, Integrated Gradients, and SmoothGrad. The details of each are as follows:
3.1 Deconvolution
The related code is as follows:
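A simplified sketch: in a deconvnet, the backward pass through ReLU keeps only positive gradients, regardless of which units were active in the forward pass (the original method's pooling switches are omitted here). `model_fn` is assumed to be a forward function built with `deconv_relu` in place of the ordinary ReLU:

```python
import tensorflow as tf

@tf.custom_gradient
def deconv_relu(x):
    """ReLU whose backward pass rectifies the gradient itself."""
    def grad(dy):
        return tf.nn.relu(dy)
    return tf.nn.relu(x), grad

def deconv_saliency(model_fn, x, class_index):
    """Input attribution via a backward pass through deconv_relu units."""
    x = tf.convert_to_tensor(x, tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model_fn(x)[:, class_index]
    return tape.gradient(score, x).numpy()
```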
3.2 Backpropagation
The related code is as follows:
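Plain backpropagation, i.e. the saliency map of Simonyan et al. [8]: the absolute gradient of the class score with respect to the input. A minimal sketch:

```python
import tensorflow as tf

def vanilla_saliency(model, x, class_index):
    """Saliency = |d f_c / d x|, computed with one backward pass."""
    x = tf.convert_to_tensor(x, tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_index]
    return tf.abs(tape.gradient(score, x)).numpy()
```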
3.3 Guided Backpropagation
The related code is as follows:
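A sketch of guided backpropagation: the backward ReLU zeroes the gradient wherever either the forward activation or the incoming gradient is negative. Swap this unit in for every ReLU in the model, then compute the input gradient exactly as in plain backpropagation:

```python
import tensorflow as tf

@tf.custom_gradient
def guided_relu(x):
    """ReLU combining the ordinary gradient mask (x > 0) with the
    deconvnet mask (dy > 0) on the backward pass."""
    def grad(dy):
        return tf.cast(x > 0, dy.dtype) * tf.nn.relu(dy)
    return tf.nn.relu(x), grad
```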
3.4 Integrated Gradients
The related code is as follows:
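A minimal sketch: integrated gradients averages the gradient along a straight-line path from a baseline (here, an all-zero input) to the input, then scales by the input difference. `x` is assumed to be a single example without a batch dimension, so the interpolated path can be fed to the model as one batch:

```python
import tensorflow as tf

def integrated_gradients(model, x, class_index, baseline=None, steps=50):
    """IG(x)_i = (x_i - b_i) * mean over the path of d f_c / d x_i."""
    x = tf.convert_to_tensor(x, tf.float32)
    baseline = tf.zeros_like(x) if baseline is None else baseline
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps),
                        [steps] + [1] * len(x.shape))
    path = baseline + alphas * (x - baseline)   # [steps, ...] inputs
    with tf.GradientTape() as tape:
        tape.watch(path)
        scores = model(path)[:, class_index]
    grads = tape.gradient(scores, path)
    return ((x - baseline) * tf.reduce_mean(grads, axis=0)).numpy()
```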
3.5 SmoothGrad
The related code is as follows:
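A minimal sketch: SmoothGrad averages the gradient over noisy copies of the input; the noise level is a fraction of the input's value range. `x` is again one unbatched example:

```python
import tensorflow as tf

def smoothgrad(model, x, class_index, n_samples=50, noise_level=0.15):
    """Average saliency over n_samples Gaussian-perturbed inputs."""
    x = tf.convert_to_tensor(x, tf.float32)
    sigma = noise_level * (tf.reduce_max(x) - tf.reduce_min(x))
    noisy = x[None] + tf.random.normal([n_samples] + x.shape.as_list(),
                                       stddev=sigma)
    with tf.GradientTape() as tape:
        tape.watch(noisy)
        scores = model(noisy)[:, class_index]
    grads = tape.gradient(scores, noisy)
    return tf.reduce_mean(grads, axis=0).numpy()
```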
4. Class Activation Map
There are three class activation mapping methods: Class Activation Map, Grad-CAM, and Grad-CAM++. The details of each are as follows:
4.1 Class Activation Map
The related code is as follows:
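A sketch that assumes the CAM architecture, i.e. a model ending in conv → GlobalAveragePooling → Dense; the layer names `'last_conv'` and `'logits'` are placeholders to be replaced with your model's actual layer names:

```python
import tensorflow as tf

def class_activation_map(model, x, class_index,
                         conv_layer_name='last_conv',
                         dense_layer_name='logits'):
    """CAM: weight the last conv feature maps with the final Dense
    layer's weights for the chosen class."""
    conv_layer = model.get_layer(conv_layer_name)
    feature_model = tf.keras.Model(model.input, conv_layer.output)
    feature_maps = feature_model(x)[0]                            # [H, W, K]
    # One weight per feature map, taken from the classification head.
    w = model.get_layer(dense_layer_name).kernel[:, class_index]  # [K]
    cam = tf.reduce_sum(feature_maps * w, axis=-1)
    return cam.numpy()   # usually normalized and upsampled for display
```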
4.2 Grad-CAM
The related code is as follows:
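A minimal Grad-CAM sketch for a functional tf.keras model: the feature maps of the last conv layer are weighted by the spatially averaged gradient of the class score, followed by a ReLU. The conv layer name is again a placeholder:

```python
import tensorflow as tf

def grad_cam(model, x, class_index, conv_layer_name='last_conv'):
    """Grad-CAM heatmap for one class."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.input,
                                [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        feature_maps, predictions = grad_model(x)
        score = predictions[:, class_index]
    grads = tape.gradient(score, feature_maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # GAP over H, W -> [1, K]
    cam = tf.reduce_sum(feature_maps * weights[:, None, None, :], axis=-1)
    return tf.nn.relu(cam)[0].numpy()
```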
4.3 Grad-CAM++
The related code is as follows:
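A sketch of Grad-CAM++ using the common closed form with Y = exp(score), under which the second and third derivatives reduce to powers of the first-order gradient (the exp(score) factor cancels in the alpha weights):

```python
import tensorflow as tf

def grad_cam_pp(model, x, class_index, conv_layer_name='last_conv'):
    """Grad-CAM++: per-location alpha weights from higher-order terms."""
    conv_layer = model.get_layer(conv_layer_name)
    grad_model = tf.keras.Model(model.input,
                                [conv_layer.output, model.output])
    with tf.GradientTape() as tape:
        feature_maps, predictions = grad_model(x)
        score = predictions[:, class_index]
    grads = tape.gradient(score, feature_maps)
    second, third = grads ** 2, grads ** 3
    global_sum = tf.reduce_sum(feature_maps, axis=(1, 2), keepdims=True)
    alpha = second / (2.0 * second + global_sum * third + 1e-8)
    weights = tf.reduce_sum(alpha * tf.nn.relu(grads), axis=(1, 2))  # [1, K]
    cam = tf.reduce_sum(feature_maps * weights[:, None, None, :], axis=-1)
    return tf.nn.relu(cam)[0].numpy()
```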
5. Quantifying Explanation Quality
Although each explanation technique is grounded in its own intuition or mathematical principle, it is also important, at a more abstract level, to identify what characterizes a good explanation and to be able to test those characteristics quantitatively. Two interpretability measures based on explanation quality and evaluation are recommended here:
5.1 Explanation Continuity
The related code is as follows:
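One way to sketch a continuity check: compare the explanations of two nearby inputs and normalize by the input distance. The function signature and the choice of norms here are illustrative; `explain_fn` can be any of the attribution functions above:

```python
import tensorflow as tf

def explanation_continuity(explain_fn, model, x1, x2, class_index):
    """||R(x1) - R(x2)||_1 / ||x1 - x2||_2 for two nearby inputs:
    smaller values indicate a more continuous explanation."""
    r1 = tf.convert_to_tensor(explain_fn(model, x1, class_index))
    r2 = tf.convert_to_tensor(explain_fn(model, x2, class_index))
    num = tf.norm(tf.reshape(r1 - r2, [-1]), ord=1)
    den = tf.norm(tf.reshape(tf.convert_to_tensor(x1 - x2, tf.float32), [-1]))
    return (num / den).numpy()
```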
5.2 Explanation Selectivity
The related code is as follows:
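A sketch of "pixel flipping" as a selectivity measure: remove input features from most to least relevant and record how quickly the class score drops; a steeper curve indicates a more selective explanation. `x` is one unbatched example and `relevance` a heatmap of the same shape; setting removed features to zero is a simplifying assumption:

```python
import numpy as np
import tensorflow as tf

def explanation_selectivity(model, x, relevance, class_index, n_steps=100):
    """Score-vs-deletion curve: flip the n_steps most relevant features."""
    x_flat = x.reshape(-1).copy()
    order = np.argsort(relevance.reshape(-1))[::-1]   # most relevant first
    scores = []
    for i in range(n_steps):
        x_flat[order[i]] = 0.0                        # "remove" the feature
        x_cur = x_flat.reshape(x.shape)[None]
        scores.append(float(model(tf.constant(x_cur, tf.float32))
                            [0, class_index]))
    return scores   # plot against i to obtain the selectivity curve
```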
References
Sections 1.1–2.2 and 5.1–5.2
[1] Montavon, G., Samek, W., Müller, K.-R., 2017. Methods for interpreting and understanding deep neural networks. arXiv preprint arXiv:1706.07979.
Section 1.2
[2] Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., Clune, J., 2016. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. pp. 3387-3395.
[3] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. In NIPS, 2016.
Section 2.3
[4] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., Samek, W., 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLOS ONE 10 (7), 1-46.
Section 2.4
[5] Montavon, G., Lapuschkin, S., Binder, A., Samek, W., Müller, K.-R., 2017. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition 65, 211-222.
Section 2.5
[6] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning Important Features Through Propagating Activation Differences. arXiv preprint arXiv:1704.02685, 2017.
Section 3.1
[7] Zeiler, M. D., Fergus, R., 2014. Visualizing and understanding convolutional networks. In: Computer Vision - ECCV 2014 - 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I. pp. 818-833.
Section 3.2
[8] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In Workshop at International Conference on Learning Representations, 2014.
Section 3.3
[9] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
Section 3.4
[10] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. arXiv preprint arXiv:1703.01365, 2017.
Section 3.5
[11] Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
Section 4.1
[12] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929, 2016.
Section 4.2
[13] R. R. Selvaraju, A. Das, R. Vedantam, M. Cogswell, D. Parikh, and D. Batra. Grad-CAM: Why did you say that? Visual explanations from deep networks via gradient-based localization. arXiv:1611.01646, 2016.
Section 4.3
[14] A. Chattopadhyay, A. Sarkar, P. Howlader, and V. N. Balasubramanian. Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. CoRR, abs/1710.11063, 2017.
