
A Roundup of Interpretability Methods for Deep Neural Networks, with TensorFlow Code Implementations

Understanding neural networks: deep learning has long been regarded as hard to interpret, yet research on understanding neural networks has never stopped. This article introduces several interpretability methods for neural networks, each with a link to code that runs in Jupyter.

 

1. Activation Maximization

 

There are two variants of explaining deep neural networks through activation maximization, as follows:

 

1.1 Activation Maximization (AM)

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.1%20Activation%20Maximization.ipynb
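
At its core, AM performs gradient ascent on the input itself: it searches for an input pattern that maximizes the logit of a chosen class. A minimal TensorFlow 2 sketch of the idea (the `model`, input shape, and hyperparameters are illustrative assumptions, not the notebook's exact code):

```python
import tensorflow as tf

def activation_maximization(model, class_idx, shape=(1, 28, 28, 1),
                            steps=200, lr=0.1, l2=1e-4):
    """Gradient ascent on the input to maximize one class logit.
    `model` is an assumed Keras classifier that returns logits."""
    x = tf.Variable(tf.random.normal(shape, stddev=0.1))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            logit = model(x)[0, class_idx]
            # Maximize the logit; the L2 penalty keeps the image plausible.
            loss = -logit + l2 * tf.reduce_sum(x ** 2)
        grads = tape.gradient(loss, [x])
        opt.apply_gradients(zip(grads, [x]))
    return x.numpy()
```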

 

 

1.2 Performing AM in Code Space

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/1.3%20Performing%20AM%20in%20Code%20Space.ipynb
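
Moving AM into code space means optimizing the latent code of a generative model instead of raw pixels, which keeps the synthesized prototype on the data manifold. A hedged sketch, where `generator` and `classifier` are hypothetical stand-ins for the notebook's pretrained models:

```python
import tensorflow as tf

def am_in_code_space(generator, classifier, class_idx, code_dim=64,
                     steps=200, lr=0.05):
    """Optimize a latent code z so that generator(z) maximizes a class
    logit. `generator` and `classifier` are assumed pretrained models."""
    z = tf.Variable(tf.random.normal([1, code_dim]))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            img = generator(z)                     # decode the latent code
            loss = -classifier(img)[0, class_idx]  # ascend the class logit
        grads = tape.gradient(loss, [z])
        opt.apply_gradients(zip(grads, [z]))
    return generator(z).numpy()
```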

 

 

2. Layer-wise Relevance Propagation

 

Relevance propagation covers five interpretability methods: Sensitivity Analysis, Simple Taylor Decomposition, Layer-wise Relevance Propagation, Deep Taylor Decomposition, and DeepLIFT. The progression is: sensitivity analysis first introduces the notion of a relevance score, simple Taylor decomposition explores a basic relevance decomposition, and on that foundation the various layer-wise relevance propagation methods are built. Details below:

 

2.1 Sensitivity Analysis

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.1%20Sensitivity%20Analysis.ipynb
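
Sensitivity analysis scores each input dimension by the squared partial derivative of the class score, R_i = (∂f_c/∂x_i)². A sketch, assuming a Keras classifier `model` that returns logits:

```python
import tensorflow as tf

def sensitivity_map(model, x, class_idx):
    """Relevance R_i = (df_c/dx_i)^2: squared input gradients.
    `model` is an assumed Keras classifier returning logits."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_idx]
    grad = tape.gradient(score, x)
    return tf.square(grad).numpy()
```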

 

 

2.2 Simple Taylor Decomposition

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.2%20Simple%20Taylor%20Decomposition.ipynb
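
Simple Taylor decomposition instead assigns R_i = x_i · ∂f_c/∂x_i, the first-order term of a Taylor expansion of the score around a root point near zero (exact for bias-free ReLU networks). Sketch, under the same assumptions as above:

```python
import tensorflow as tf

def simple_taylor(model, x, class_idx):
    """First-order Taylor relevance R_i = x_i * df_c/dx_i.
    Same assumed Keras `model` as in the sensitivity sketch."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_idx]
    grad = tape.gradient(score, x)
    return (x * grad).numpy()
```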

 

 

2.3 Layer-wise Relevance Propagation

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%281%29.ipynb

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.3%20Layer-wise%20Relevance%20Propagation%20%282%29.ipynb
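
LRP propagates a relevance score from the output back through the network, layer by layer, approximately conserving total relevance. A NumPy sketch of the ε-rule for a single dense layer (shapes and the stabilizer ε are illustrative; the notebooks handle conv layers and whole networks):

```python
import numpy as np

def lrp_epsilon(activations, weights, relevance, eps=1e-6):
    """One LRP-epsilon step for a dense layer: redistribute output
    relevance to inputs. activations: (d_in,), weights: (d_in, d_out),
    relevance: (d_out,). Shapes are illustrative assumptions."""
    z = activations @ weights                                # pre-activations z_j
    s = relevance / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return activations * (weights @ s)                       # R_i = a_i * sum_j w_ij * s_j
```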

 

 

2.4 Deep Taylor Decomposition

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%281%29.ipynb

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.4%20Deep%20Taylor%20Decomposition%20%282%29.ipynb
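
Deep Taylor decomposition derives LRP-style rules from per-neuron Taylor expansions; for ReLU layers with non-negative inputs this gives the z⁺-rule, in which only positive weights carry relevance backward. A sketch in the same one-layer setting as the LRP example:

```python
import numpy as np

def deep_taylor_zplus(activations, weights, relevance):
    """Deep Taylor z+ rule for a ReLU layer: relevance flows back only
    through positive weights. Same illustrative shapes as lrp_epsilon."""
    w_pos = np.maximum(weights, 0.0)
    z = activations @ w_pos + 1e-9   # small constant avoids division by zero
    s = relevance / z
    return activations * (w_pos @ s)
```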

 

 

2.5 DeepLIFT

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/2.5%20DeepLIFT.ipynb
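
DeepLIFT replaces gradients with finite differences against a reference input, which lets contributions flow through saturated units where the gradient is zero. A heavily simplified one-layer sketch of the rescale rule (the full method chains these multipliers through the entire network):

```python
import numpy as np

def deeplift_rescale(x, x_ref, w, b):
    """Rescale rule for one dense+ReLU layer. x, x_ref: (d_in,);
    w: (d_in, d_out); b: (d_out,). All names are illustrative."""
    relu = lambda v: np.maximum(v, 0.0)
    dy = relu(x @ w + b) - relu(x_ref @ w + b)    # difference-from-reference outputs
    dz = x @ w - x_ref @ w                        # difference in pre-activations
    m = np.divide(dy, dz, out=np.zeros_like(dy), where=np.abs(dz) > 1e-9)
    return (x - x_ref)[:, None] * w * m           # contribution of x_i to each unit
```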

 

 

3. Gradient-Based Methods

 

The gradient-based methods are: deconvolution, backpropagation, guided backpropagation, integrated gradients, and SmoothGrad. For a shared implementation, see:

 

https://github.com/1202kbs/Understanding-NN/blob/master/models/grad.py

 

Details for each method follow:

 

3.1 Deconvolution

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.1%20Deconvolution.ipynb
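
A deconvnet visualizes features by running the signal backward and applying ReLU to the backward signal itself, instead of reusing the forward activation mask. In TensorFlow 2 this can be sketched as a ReLU with a custom gradient; rebuild the network with this op, then differentiate the class score with respect to the input:

```python
import tensorflow as tf

@tf.custom_gradient
def deconv_relu(x):
    """ReLU whose backward pass keeps only positive gradients (the
    deconvnet rule), ignoring which units were active on the forward pass."""
    def grad(dy):
        return tf.nn.relu(dy)
    return tf.nn.relu(x), grad
```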

 

 

3.2 Backpropagation

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.2%20Backpropagation.ipynb
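
Plain backpropagation simply uses the raw input gradient of the class score as the saliency map (the unsquared counterpart of the sensitivity map in 2.1). Sketch, with the same assumed Keras `model`:

```python
import tensorflow as tf

def saliency(model, x, class_idx):
    """Vanilla backpropagation: the raw gradient of the class score
    with respect to the input. `model` is an assumed Keras classifier."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        score = model(x)[:, class_idx]
    return tape.gradient(score, x).numpy()
```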

 

 

3.3 Guided Backpropagation

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.3%20Guided%20Backpropagation.ipynb
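
Guided backpropagation combines the two previous rules: a gradient passes through a ReLU only where both the forward activation and the backward signal are positive. The deconv sketch above needs only one extra mask:

```python
import tensorflow as tf

@tf.custom_gradient
def guided_relu(x):
    """ReLU for guided backpropagation: gradients pass only where the
    forward activation and the incoming gradient are both positive."""
    def grad(dy):
        return tf.cast(x > 0, dy.dtype) * tf.nn.relu(dy)
    return tf.nn.relu(x), grad
```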

 

 

3.4 Integrated Gradients

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.4%20Integrated%20Gradients.ipynb
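
Integrated gradients averages the input gradient along a straight path from a baseline to the input and scales by the input difference, so attributions sum to f(x) − f(baseline). A Riemann-sum sketch (the zero baseline and 50 steps are conventional choices, not prescribed by the notebook):

```python
import tensorflow as tf

def integrated_gradients(model, x, class_idx, baseline=None, steps=50):
    """Riemann-sum approximation of integrated gradients.
    `model` is an assumed Keras classifier returning logits."""
    x = tf.convert_to_tensor(x)
    baseline = tf.zeros_like(x) if baseline is None else baseline
    total = tf.zeros_like(x)
    for k in range(1, steps + 1):
        xi = baseline + (k / steps) * (x - baseline)  # point on the path
        with tf.GradientTape() as tape:
            tape.watch(xi)
            score = model(xi)[:, class_idx]
        total += tape.gradient(score, xi)
    return ((x - baseline) * total / steps).numpy()
```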

 

 

3.5 SmoothGrad

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/3.5%20SmoothGrad.ipynb
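
SmoothGrad denoises a saliency map by averaging it over several noise-perturbed copies of the input. Sketch; `n` and `sigma` below are typical but arbitrary values:

```python
import tensorflow as tf

def smoothgrad(model, x, class_idx, n=25, sigma=0.15):
    """Average input gradients over n noisy copies of the input.
    `model` is an assumed Keras classifier returning logits."""
    x = tf.convert_to_tensor(x)
    total = tf.zeros_like(x)
    for _ in range(n):
        noisy = x + tf.random.normal(tf.shape(x), stddev=sigma)
        with tf.GradientTape() as tape:
            tape.watch(noisy)
            score = model(noisy)[:, class_idx]
        total += tape.gradient(score, noisy)
    return (total / n).numpy()
```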

 

 

4. Class Activation Map

 

There are three class activation mapping methods: Class Activation Map (CAM), Grad-CAM, and Grad-CAM++. For the MNIST-based experiments, the cluttered-MNIST data generator is available at:

 

https://github.com/deepmind/mnist-cluttered

 

Details for each method:

 

4.1 Class Activation Map

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.1%20CAM.ipynb
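
CAM applies to networks that end in global average pooling followed by a single dense layer: the heatmap is that class's dense weights applied across the final conv feature maps. A NumPy sketch (extracting `feature_maps` and `fc_weights` from the trained model is left to the notebook):

```python
import numpy as np

def cam(feature_maps, fc_weights, class_idx):
    """feature_maps: (H, W, C) from the last conv layer; fc_weights:
    (C, num_classes) of the final dense layer. Names are illustrative."""
    w = fc_weights[:, class_idx]                   # (C,)
    heatmap = np.maximum(feature_maps @ w, 0.0)    # (H, W), ReLU'd
    return heatmap / (heatmap.max() + 1e-9)        # normalize to [0, 1]
```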

 

 

4.2 Grad-CAM

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.2%20Grad-CAM.ipynb
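
Grad-CAM lifts the GAP-plus-dense restriction by using the spatially averaged gradient of the class score as the channel weights. A TensorFlow 2 sketch, assuming a functional Keras `model` and the name of its last conv layer:

```python
import tensorflow as tf

def grad_cam(model, last_conv_layer, x, class_idx):
    """Grad-CAM heatmap. `model` and `last_conv_layer` (a layer name)
    are assumptions about the notebook's network."""
    sub = tf.keras.Model(model.inputs,
                         [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        fmaps, preds = sub(x)
        score = preds[:, class_idx]
    grads = tape.gradient(score, fmaps)            # (1, H, W, C)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # GAP over H, W -> (1, C)
    heatmap = tf.nn.relu(tf.einsum('bhwc,bc->bhw', fmaps, weights))
    return (heatmap / (tf.reduce_max(heatmap) + 1e-9)).numpy()
```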

 

 

4.3 Grad-CAM++

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/4.3%20Grad-CAM-PP.ipynb
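
Grad-CAM++ refines the channel weights with per-location coefficients α built from higher powers of the gradient, which handles multiple instances of a class in one image better. A sketch in the same setting as the Grad-CAM example (this version uses the raw class score, whereas the paper first applies an exponential):

```python
import tensorflow as tf

def grad_cam_pp(model, last_conv_layer, x, class_idx):
    """Grad-CAM++ heatmap; same assumed `model` as the Grad-CAM sketch."""
    sub = tf.keras.Model(model.inputs,
                         [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        fmaps, preds = sub(x)
        score = preds[:, class_idx]
    g = tape.gradient(score, fmaps)
    g2, g3 = g ** 2, g ** 3
    denom = 2.0 * g2 + tf.reduce_sum(fmaps * g3, axis=(1, 2), keepdims=True)
    alpha = g2 / (denom + 1e-9)                                  # per-location weights
    weights = tf.reduce_sum(alpha * tf.nn.relu(g), axis=(1, 2))  # (1, C)
    heatmap = tf.nn.relu(tf.einsum('bhwc,bc->bhw', fmaps, weights))
    return (heatmap / (tf.reduce_max(heatmap) + 1e-9)).numpy()
```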

 

 

5. Quantifying Explanation Quality

 

Although every explanation technique rests on its own intuition or mathematical principle, it is equally important to characterize, at a more abstract level, what makes a good explanation, and to be able to test those properties quantitatively. Two such quality criteria for evaluating explanations are covered here:

 

5.1 Explanation Continuity

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/5.1%20Explanation%20Continuity.ipynb
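
Continuity demands that nearly identical inputs receive nearly identical explanations. It can be probed empirically by measuring how far a heatmap moves per unit of input perturbation; a rough sketch, where `explain` is any attribution function from the sections above:

```python
import numpy as np

def continuity_gap(explain, x, eps=1e-2, trials=20):
    """Worst observed ratio |R(x') - R(x)| / |x' - x| over small random
    perturbations; smaller means a more continuous explanation.
    `explain` is an assumed attribution function (input -> heatmap)."""
    r0, worst = explain(x), 0.0
    for _ in range(trials):
        xp = x + eps * np.random.randn(*x.shape)
        ratio = np.abs(explain(xp) - r0).max() / (np.abs(xp - x).max() + 1e-12)
        worst = max(worst, ratio)
    return worst
```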

 

 

5.2 Explanation Selectivity

 

Related code:

 

http://nbviewer.jupyter.org/github/1202kbs/Understanding-NN/blob/master/5.2%20Explanation%20Selectivity.ipynb
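
Selectivity is commonly measured by "pixel-flipping": delete the most relevant inputs first and track how quickly the class score collapses; a steeper drop means a more selective explanation. Sketch, where `model_score` is a hypothetical callable mapping an input to the target class score:

```python
import numpy as np

def selectivity_curve(model_score, relevance, x, n_steps=50):
    """Zero out inputs in order of decreasing relevance and record the
    class score after each chunk. `model_score` is an assumed callable."""
    order = np.argsort(relevance.ravel())[::-1]   # most relevant first
    xf = x.copy().ravel()
    scores = [model_score(xf.reshape(x.shape))]
    for idx in np.array_split(order, n_steps):
        xf[idx] = 0.0                             # remove the next chunk
        scores.append(model_score(xf.reshape(x.shape)))
    return np.array(scores)
```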

 
